
Microservices - Setting up PCF Dev for a local development environment

In this tutorial we are going to learn how to set up a PCF development environment on a local desktop or laptop.

Why PCF

PCF is a commercial, multi-cloud platform where customers can run enterprise applications. It provides continuous delivery, security, and customization for your products in the cloud, so customers can focus on actual application development and deployment rather than on preparing the infrastructure. For more details you may refer to the link below.
https://pivotal.io/why-pivotal

What is PCF Dev

PCF Dev is a distribution provided by Pivotal that allows developers to run a full-featured Cloud Foundry locally, which makes development and debugging easier. If you create a free developer account on PCF, you get only 2 GB of memory to run your applications there, which may not be enough for a microservice application. When you set up PCF Dev on your own machine you don't face such limits, and memory is constrained only by your machine.

Installation Steps

I have used macOS for this demo, so my commands and packages are specific to that, but PCF Dev also supports Linux and Windows.
Before starting the installation, we need to register with Pivotal so we can download the binaries required to set up PCF Dev. Below is the link to register at Pivotal.
Registration URL: https://account.run.pivotal.io/z/uaa/sign-up
Please note that setting up PCF Dev needs around 8 GB of memory and around 100 GB of disk space.
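If you want to quickly confirm that your Mac meets these requirements before downloading anything, the two commands below use standard macOS utilities to print the total physical memory (in bytes) and the free space on the root volume; this is just a convenience check, not part of Pivotal's documented setup.
sysctl hw.memsize    # total physical memory in bytes (e.g. 17179869184 for 16 GB)
df -h /              # free disk space on the root volume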

CF CLI

PCF Dev uses the cf CLI tool to upload and run applications, so first of all we will install the cf CLI, which can be set up by following the links below provided by Pivotal.
Pivotal link to download CLI: https://cli.run.pivotal.io/stable?release=macosx64&source=github

Once you have downloaded the CLI, you can install it by executing the package. If you need help, check the section "Cloud Foundry Command Line Interface" at the link below.
Pivotal link to setup CLI: https://docs.pivotal.io/pcf-dev/install-osx.html
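If you already use Homebrew, an alternative (and optional) way to install the cf CLI is through the tap published by the Cloud Foundry project; this is not part of Pivotal's documented steps above, but it puts the same cf binary on your PATH.
brew install cloudfoundry/tap/cf-cli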

Once it is installed you can verify it by executing the command "cf".
(Screenshot: cf version check)
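For example, the commands below should confirm the installation; the exact version string in the output will depend on the release you downloaded.
cf --version
cf help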

PCF Dev installation

Follow the steps below to install PCF Dev. The cf commands require administrator access, which is why I have used "sudo" with every command; you will be prompted for your admin password. A consolidated recap of all the commands is given after the steps.
1. First of all, you need to download the PCF Dev binary from the link below. Please note that it is around 20 GB in size, so make sure you have enough internet bandwidth.
https://network.pivotal.io/products/pcfdev
At the above link you will see a screen similar to the one below. Please select the highlighted binary for macOS.
(Screenshot: PCF Dev download page)
2. Install the cfdev plugin, which is required to run PCF Dev.
sudo cf install-plugin cfdev
If you see an error similar to the one given below, it means you already have a conflicting cf plugin and you need to remove it before proceeding.
Plugin cfdev v0.0.17 could not be installed as it contains commands with names that are already used: dev.
In this case you can execute the command below to check the installed plugins.
sudo cf plugins
You may see the output below. Note that Pivotal has deprecated the pcfdev plugin, and it needs to be uninstalled so that we can install cfdev.
plugin   version   command name   command help
pcfdev   0.30.2    dev, pcfdev    Control PCF Dev VMs running on your workstation
Now we need to remove the "pcfdev" plugin by executing the command below. Once the existing plugin is removed, we can re-execute the command from step 2 to install the cfdev plugin.
sudo cf uninstall-plugin pcfdev
3. Now start PCF Dev using the command below. We need to provide the location of the binary downloaded in step 1 as "pcf-dev-binary".
Syntax: sudo cf dev start -f <pcf-dev-binary>
Example: sudo cf dev start -f /Users/Downloads/pcfdev-v1.2.0-darwin.tgz
4. To stop PCF Dev, execute the command below.
sudo cf dev stop
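Putting the steps together, a typical first-time session on macOS looks like the sketch below; the binary path is the example from step 3, so replace it with your own download location, and the uninstall line is only needed if the deprecated pcfdev plugin shows up in the plugin list.
sudo cf plugins                     # check for a conflicting pcfdev plugin
sudo cf uninstall-plugin pcfdev     # only if the deprecated pcfdev plugin is listed
sudo cf install-plugin cfdev
sudo cf dev start -f /Users/Downloads/pcfdev-v1.2.0-darwin.tgz
sudo cf dev stop                    # when you are done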

Other posts you may like:

Microservice tutorial using Spring boot

Comments

  1. I am inspired by your writing style and how thoroughly you describe this topic. Thanks for taking the time to discuss this; I feel happy about it and I love learning more about this topic.

    Reply: Thanks for your kind words. I am very happy to know that you found it helpful.
  2. N-Technologies, 1 March 2020 at 08:35

    Are the deployed applications and settings destroyed after the PCF Dev instance is shut down (by running cf dev stop)? I noticed everything was gone when I restarted the instance (cf dev start -f). Is there any way to preserve them?

    Reply: I'm about to try PCF Dev too. This is a very good question; I hope somebody from Pivotal answers it.

