Fixing “Unable to load task handler … for task …” in VSTS/TFS 2015 build

Last week I introduced a client to the new TFS 2015 build system. They happily started experimenting with it, but soon ran into a bit of a cryptic error message saying “Unable to load task handler … for task …”.


It turns out that the solution was pretty simple: the version of PowerShell that’s running on the build agent machine needs to be at least version 4.0. You can easily check this by typing “$PSVersionTable” in a PowerShell window. The “PSVersion” should read at least 4.0.


After upgrading the PowerShell version on the build machine, all was fine!

You can download the Windows Management Framework, which includes PowerShell, here:

I hope this helps!

Happy building!


Using Docker tools for Visual Studio with a Hyper-V based Docker host

In the past few weeks I’ve been playing around with containerizing an ASP.NET Core application using the Docker tools for Visual Studio. This allows you to develop and debug your app locally inside a Docker container. To do this, you’ll need a local Docker host. While you could ask your IT department to provide one for you, I found it much more convenient to run a virtual machine locally on my laptop, so I have it available everywhere I go.

To create a local Docker host, you would normally use the Docker Toolbox. This uses VirtualBox to create a local virtual machine which serves as your Docker host. However, I already had Hyper-V installed as my virtualization hypervisor. Hyper-V works great on Windows 10, so I wanted to keep that. Sadly, VirtualBox doesn’t play nice with Hyper-V (in short, VirtualBox won’t install if Hyper-V is enabled).

Solution: create a local Docker host on Hyper-V. Unfortunately, the process is a little finicky, so I thought I’d describe it here.

(Note: of course you can also use Azure to run a Docker host. However, for the “Edit & Refresh” experience to function, you need a shared drive between your Docker host and your local development machine. Because of common firewall/network restrictions, this is often not so easy to achieve with an Azure VM. That’s when a local Docker host is nice to have.)


You’ll need to install a few bits before you can work with the Docker tools for Visual Studio:

Setting up the network

Next thing to do is create a local Docker host. First, let’s get our Hyper-V infrastructure set up.

We’ll need to create a Virtual Switch to which the Docker host will connect, and make sure that it has internet access. Inside Hyper-V manager open up the Virtual Switch Manager and create a new virtual switch. Give it a name and make sure that it’s an Internal network:


We’re using an internal network here, to make sure that the IP address stays the same, even through reboots or when you connect your development machine to a different network. The downside of using an internal network is that it’s not connected to your external network, and so things like downloading Docker images from public repositories or restoring NuGet packages from public feeds will not work. To overcome this, we can use the Internet Connection Sharing feature to share our internet connection with the newly created internal network.

Open up the “View network connections” window (in Windows 10, just search for “network” and it will appear):


From there, open up the properties of the adapter which is your internet connection (in my case it’s a bridged adapter) and enable Internet Connection Sharing from the Sharing tab. Select your internal network as the Home networking connection:


Create the Docker host virtual machine

Next thing to do is create the virtual machine that will be your Docker host. The Docker Toolbox provides a nice little command-line tool to manage Docker hosts: docker-machine. You can use this to create, start, stop or delete a Docker host.

Open up a PowerShell prompt and issue the command to create a Docker host:

$ docker-machine create --driver hyperv --hyperv-virtual-switch "<your virtual switch>" <your vm name>


It will take a bit of time, but when it completes you should see your new VM running in Hyper-V manager:


The last thing to do is set your environment to use the newly created VM as the active machine. You can do this by typing in a PowerShell prompt:

$ docker-machine env <your vm name> | Invoke-Expression

Afterwards you can type “docker-machine ls” to verify that your new VM is indeed the active machine:


Sharing a drive to make “Edit & Refresh” work

To make the “Edit & Refresh” experience work, the Docker tools for Visual Studio expect that the folder in which you keep your code on your development machine is shared to the Docker host VM at the same path. The Docker toolbox takes care of this automatically if you use VirtualBox. However, when using Hyper-V you will need to take care of this yourself.

In my case, I keep all my code on my Windows machine under “D:\Git”. We’ll need to share this to the Docker host VM, but since this is a Linux VM, the path will look a little bit different: “/d/Git”. We’ll use normal Windows network sharing to achieve this.
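The mapping is mechanical: lower-case the drive letter, drop the colon and flip the backslashes. As a throwaway illustration (a helper I made up for this post, not part of the Docker toolchain):

```shell
#!/bin/sh
# Convert a Windows path ("D:\Git") to its boot2docker-style
# equivalent ("/d/Git"): lower-case drive letter, forward slashes.
win_to_docker_path() {
  drive=$(printf '%s' "${1%%:*}" | tr '[:upper:]' '[:lower:]')
  rest=$(printf '%s' "${1#*:}" | tr '\\' '/')
  printf '/%s%s\n' "$drive" "$rest"
}

win_to_docker_path 'D:\Git'    # prints /d/Git
```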

First, share the folder on your Windows machine. Open up the properties of the folder and go to the “Sharing” tab. From there, share the folder:


Now, we need to connect to this shared drive from the Docker host. To do this, first connect to your Docker host using SSH. You can easily do this by using:

$ docker-machine ssh <your vm name>


Then, create the Linux equivalent path of your shared folder (in my case “/d/Git”) and then connect to it using “mount”:

$ sudo mkdir -p <your path>
$ sudo mount -t cifs // /d/Git -o user=keesv,pass=<your password>,domain=<your domain>

Of course, replace these with the correct IP (of your Windows development machine), paths, username and password. If you’re not using a domain account, you can leave out the “,domain=<your domain>” part. Now check that the contents of the shared folder are available by using “ls /d/Git”:


You’ll notice that the shared drive is no longer connected when you restart your Docker host VM. To avoid having to reconnect it every time, we can make the connection persistent. To do this (while still connected through SSH), create a script which will be called upon each boot:

$ sudo touch /mnt/sda1/var/lib/boot2docker/

Then open up the file for editing (I’ll use vi here, since it’s available by default):

$ sudo vi /mnt/sda1/var/lib/boot2docker/

To start editing in vi, hit “i”. Then type the following (replacing for your configuration where appropriate):

mkdir -p /d/Git
mount -t cifs // /d/Git -o user=keesv,pass=<your password>,domain=<your domain>


Exit insert mode by hitting Esc, then type “:wq” and hit <enter> to save and quit.

Now reboot your VM, SSH back in and check that the shared folder is available:


Running an ASP.NET Core app

To see that all the bits are working together, we’ll create a very basic ASP.NET Core app and run it in a Docker container on our freshly created Docker host.

First, create the app. Make sure that it’s created inside the folder that you just shared to your Docker host!



Then, use the Docker tools for Visual Studio to add Docker support to the project:


We’ll need to specify the Docker host on which we want to run the container. This needs to be a host which is known by docker-machine. Open up the “Docker.props” file and change the Docker machine name:


You’ll need to restart Visual Studio after making this change, so do that now. When you have your project re-opened, set the debugging target to “Docker” and hit F5:


This will build the Docker image inside your Docker host VM and then run a container based on it. After a while, the web page should pop up:


Notice that the Visual Studio debugger is connected to your app running inside the container, so you can set breakpoints, use watches and all the other goodies that make Visual Studio so nice for debugging!

Happy Dockerizing!

Debugging your VSTS extension

Recently I’ve been working on developing some extensions for Visual Studio Team Services (VSTS). Being able to develop custom extensions is great, since it enables you to extend the service with features that fulfil your needs.

Creating an extension for VSTS consists of a few steps:

  1. Develop the code
  2. Package the extension
  3. Publish it to the marketplace
  4. Install it in your VSTS account
  5. Start using your extension!

In order to use your extension inside VSTS, you have to go through all these steps. When you’re still developing and debugging your code, this makes for a very painful process. There is a neat little trick that you can use to overcome this, which makes debugging a VSTS extension much easier. It is described somewhere in the documentation, but it’s very well hidden so I thought I’d write a blog post about it.

How extensions work

Extensions for the VSTS web interface are actually web pages which are loaded inside an iframe in the VSTS UI. VSTS supports many extension points, and you can use whichever one suits your purpose best. For example, an extra tab on the work item form, a dashboard widget, an additional hub group or even an entire hub. When VSTS loads the html page for your extension, it will become visible in the UI. Inside your html page you include the VSTS extension SDK (“VSS.SDK.js”). That will take care of properly loading your extension inside the VSTS UI and will make some services available to your extension for getting data from and to VSTS. To put it all in a picture:


In order for your extension to load in VSTS, VSTS has to know some things about it, like: where to load your extension (e.g. as a dashboard widget or hub group), which html page to load and which permissions are required. These things are configured in your extension manifest. This manifest is used when packaging and publishing your extension to the marketplace. From there, it can be installed in your VSTS account.

The manifest also specifies from where your extension html should be loaded. If you have a simple extension with just a few static html/javascript/css files, you can package these with your extension and they will be hosted inside VSTS. If your extension is more complex (e.g. you have some ASP.NET pages) then you will need to provide your own hosting and specify that in your manifest.
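Putting that together, a bare-bones extension page can be as small as this (a sketch; the path to VSS.SDK.js depends on how you package your files):

```html
<!DOCTYPE html>
<html>
<head>
    <!-- Adjust the path to wherever you ship the SDK file. -->
    <script src="sdk/scripts/VSS.SDK.js"></script>
</head>
<body>
    <div id="content">Hello from my extension!</div>
    <script>
        // Handshake with the host frame; VSTS shows the extension
        // once the SDK reports that loading succeeded.
        VSS.init({ explicitNotifyLoaded: true });
        VSS.ready(function () {
            VSS.notifyLoadSucceeded();
        });
    </script>
</body>
</html>
```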

Preparing the debugging

When I’m developing an extension, I’m using Visual Studio. Of course I want to be able to use all of Visual Studio’s debugging power and do quick iterations (write code, hit F5, check if it works). In order to make this work, we’ll trick VSTS into loading our extension html from our local machine. Of course this won’t work for production scenarios, but for debugging it’s fantastic.

In order to do so we’ll create a special manifest for development (in my case: “vss-extension-debug.json”). Inside that manifest, there are a few properties that we’ll change especially for debugging purposes:

  "id": "devdemoextension",
  "name": "Dev: Demo Extension",
  "baseUri": "https://localhost:44300",

  • id: the id has to be unique across all your extensions. I usually prepend “dev” for my development versions.
  • name: this will be displayed in the VSTS marketplace and when you’re installing the extension in your account. I usually prepend “Dev:” so that I can identify the development version
  • baseUri: this is where the magic happens. This tells VSTS to load the extension from your localhost, where you have your development version running in IISExpress from Visual Studio.
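Putting those properties in context, the top of my debug manifest looks roughly like this (the publisher and version values are placeholders for illustration):

```json
{
  "manifestVersion": 1,
  "id": "devdemoextension",
  "version": "0.1.0",
  "name": "Dev: Demo Extension",
  "publisher": "your-publisher-id",
  "baseUri": "https://localhost:44300",
  "targets": [ { "id": "Microsoft.VisualStudio.Services" } ]
}
```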

Note: you have to run IISExpress in SSL mode, because VSTS demands that your extension is served from a secure source. You can enable SSL mode in the properties of your project in Visual Studio:


Now package your extension using the manifest modified for debugging:

vset package -m vss-extension-debug.json

Upload the resulting .vsix file to the marketplace and share & install it in your account. For instructions on how to do this, you can refer to the Visual Studio site.

Doing the debugging

Now that we have all the yak shaving done, we can get to the actual debugging. The easiest way to do this is to change the properties of your project in Visual Studio and set the Start URL to the location where your extension will load:


This will make sure that the debugger is attached to the correct browser process.

Now hit F5 and you’ll be able to use the full power of the Visual Studio debugger!

Happy debugging!

Using environment variables in build vNext

The new build system which was introduced in Visual Studio Team Services (VSTS) and TFS 2015 has been gaining more popularity lately, and (in my opinion) rightly so. It simply works far better than the old XAML builds. I’ve been converting quite a few XAML based builds to the new build system. While doing this, I (inevitably of course) came across quite a few custom build workflows and tasks. Some of these were easily converted to build tasks that are available out-of-the-box, while others required some custom tasks. When creating a custom build or a custom build task, you’ll at some point need some build-specific information. This is where environment variables come in. Since I wasn’t too familiar with the concept (I’d heard of them, but never really used them) I decided to dig a little deeper.

What is an environment variable?

Environment variables are a concept within the operating system. They are a set of key-value pairs which are specific to the environment of your current process. The use of environment variables is well known on most operating systems, such as Windows, Linux & Mac OS X. This makes them very useful in cross-platform scenarios, such as a build agent.

Some examples of common environment variables are “PATH” (a list of directories to search for a command you’re executing), “TEMP” (the path to store temporary files) and “WINDIR” (the path to the Windows installation directory).

Environment variables in the build process

In the context of a build job, the current process will be the execution of your build definition. So, any environment variables created in this context are available to all the tasks in your build definition! This means that you can use environment variables inside your build tasks to control the execution of the build task. Also, you’ll be able to pass information from one task to other tasks downstream in the build process.

There are multiple sources for setting the environment variables:

  • The operating system of your build server: will set server level variables such as “TEMP” and “PATH”,
  • The build agent: will set some variables specific to running a build, such as the agent working folder and things like the build definition that’s being executed and the build number,
  • The build definition: any variables that you set on the “Variables” tab in your build definition (either in the definition itself or when queueing a build) will be available as environment variables in the build job.
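For example, a variable defined on the “Variables” tab surfaces in the environment upper-cased, with dots replaced by underscores. A custom Node.js task could read it like this (the variable name “MyConfiguration” is made up for illustration):

```javascript
// Read a build definition variable from the environment.
// A variable "MyConfiguration" on the Variables tab arrives
// as "MYCONFIGURATION"; fall back to a default if it's unset.
const configuration = process.env.MYCONFIGURATION || 'Debug';
console.log('Building configuration: ' + configuration);
```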

For more information regarding environment variables in your build definition you can refer to:

Showing all environment variables

The MSDN page above gives a list of pre-defined variables that are available inside your build job. While it provides quite a bit of insight, this wasn’t enough for me. I wanted to know all environment variables that are available, as well as their actual values during the execution of a build. As it turns out, printing all environment variables and their values is actually not that hard. In PowerShell this can be achieved with:

Get-ChildItem Env:

And in Node.js there is an object that contains all environment variables:
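That object is process.env. Dumping its contents takes only a few lines:

```javascript
// Print every environment variable and its value, one per line,
// sorted by name for easier reading in the build log.
Object.keys(process.env)
  .sort()
  .forEach(function (name) {
    console.log(name + '=' + process.env[name]);
  });
```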


You could put this inside the code of any custom build task that you’re developing. However, I decided to wrap this in a small custom build task that will print all environment variables and their corresponding values. This will allow for easy debugging by just including the task anywhere in your build process. You could even include it multiple times to see if and how any environment variables were changed between tasks.

The output of the task looks something like this:


As you can see, there are many environment variables defined. In my case, there were 114…

The code for this task can be found on GitHub. Feel free to use it!

Hopefully this will help you to understand what’s going on in your builds.

Happy building!

[This post is published in Dutch on the Delta-N blog]

Uploading a custom build.vNext task

[Update 2015-08-12] Microsoft has just made a new command-line utility called “tfx-cli” available which allows you to create, upload, delete and list build tasks. You can use that, instead of the “TaskUploader” described in this post. I’ll leave this post up here, since the part about building the tasks is still useful.
“tfx-cli” is distributed through npm and can be found here.
A nice walkthrough by Jeff Bramwell can be found on his blog here.

With the introduction of the new build.vNext build system in Visual Studio Online and TFS 2015 (currently in RC) creating a build definition has become a lot easier. There are already quite a lot of tasks available, that allow you to build, test and deploy your software from the build engine. However, there’s always something more: what if you want to implement your own custom task? Fortunately, this is possible! You can write your own task as a Node.js application and make that available in VSO or your on premise TFS 2015 server.

You can look at the vso-agent-tasks repository on GitHub for the source code of the tasks that are made available by Microsoft. Based on that, you can also create your own task!

This post will show you how you can build and upload a custom task to VSO or TFS. For this post we’ll use the “SonarQubePostTest” task, which is not yet publicly available on VSO. However, the code is already there in the Git repository.


As usual, you’ll first need to get some prerequisites in place. You’ll need to install Node.js and npm from the Node.js website. After installation, ensure that you have at least Node 0.10 and npm 1.4:


I have Node 0.12.7 and npm 2.11.3, so I’m good to go!

Now install gulp by typing “npm install gulp -g”.


Finally, make sure you have a Git client installed. I’m using the built-in client that’s in Visual Studio 2015, but you can use any Git client you like.

Cloning the repository

You’ll need to have a local copy of the vso-agent-tasks Git repository. You can get one by cloning the repository from GitHub. When using Visual Studio 2013 or 2015, you can do this by going to the “Connect” tab in Team Explorer. Then, click the “Clone” button and enter the URL for the Git repository and a local folder. When you click the “Clone” button, you’ll get a local copy of the repository.


In my case, I cloned the repository to “C:\vso-agent-tasks”

Building the tasks

Before you can upload the tasks, you’ll need to build them. And of course, we’ll first need to fetch some dependencies. To do that, “cd” to the root of the repository (“C:\vso-agent-tasks”) and execute “npm install”.


Then, again from the root of the repository, execute “gulp” to build the tasks.


It’ll take a few seconds and should finish successfully.


You’ll end up with a “_build” directory which contains the built tasks.


Uploading the task

This is the fun bit, which isn’t documented really well. In the vso-agent-tasks repository there is a small utility called the “TaskUploader”, which will allow you to upload a task to VSO and/or TFS. We’ll first need to build it. This can be done by executing “gulp uploader” from the root of the repository.


You’ll get a built version in the “_build” subdirectory.


This is a Node.js application that allows you to upload a task to VSO and TFS. You’ll need to provide two arguments:

  • The URL of the collection where you want to upload the task to
  • The directory of the task that you want to upload

For now, we’ll upload the “SonarQubePostTest” task from the “_build\Tasks\SonarQubePostTest” directory to my VSO account. The full command line, starting from the root of the repository is:

node _build\TaskUploader\taskUploader.js <your VSO URL> _build\Tasks\SonarQubePostTest

You’ll be asked for your username and password. These need to be your alternate credentials.


After a while, you’ll see a bunch of JSON data as the response from the server. In there, you should find a statusCode and statusMessage, which should tell you that the task was created.


Now, when you go and create a build definition, you should be able to add the newly uploaded task there:


What about On-premise TFS?

Because the taskUploader relies on “Basic Authentication”, you’ll need to enable that on IIS on your TFS application tier. By default, this is disabled.

To enable “Basic Authentication”, go into “Add Roles and Features” and make sure that “Basic Authentication” is installed (under “Web Server (IIS)”, “Web Server”, “Security”).


Then, go into IIS manager and navigate to the “tfs” node under “Team Foundation Server” and open the “Authentication” module.


Then, enable “Basic Authentication”:


After that, you’ll be able to upload the task to your on-premise TFS server. Just make sure you’re using the URL for your collection and of course a valid user.


Happy building!

Building Java code on Linux using VSO Build.vNext

OK, one more post for today… This morning I promised you I would show you how to build Java code on a Linux build agent using the new Build.vNext build system in Visual Studio Online. This post is to deliver on that promise. Being a Linux and a Java newbie, I thought it would be a complex task. As it turns out, it isn’t. Once you have all the bits in place, the great new build system makes it easy to create a build definition that compiles the code. Actually, I even stuck some unit tests in there and got the test results to publish to VSO.

Before getting started, you should have two things in place:

Project structure

For this post, I created a small demo Java project in Eclipse. I won’t go into all the details on how to do this, but there are a few things that I’d like to point out:

  • The project uses Apache Maven to manage the build process. Maven is widely used for Java projects and support is integrated in Eclipse, as well as in Build.vNext.
  • The project consists of a simple “Hello World” application. The main class file (“”) contains code that prints out a “Hello World” message and a method that performs an addition (not actually used anywhere).
  • There are two unit tests for the project, which are defined in the “” file. These tests are written using the JUnit framework.
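For reference, a minimal pom.xml for such a project could look something like this (the group and artifact ids are illustrative, not the exact ones from my demo):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Illustrative coordinates; use your own. -->
  <groupId>com.example.demo</groupId>
  <artifactId>hellomaven</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <dependencies>
    <!-- JUnit for the unit tests; the surefire plugin writes the
         XML test reports that the Maven build task can publish. -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```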

Prerequisites on the build agent

If you followed my post on creating a Build.vNext agent on Linux you’ll have most of the bits already installed on your agent. We only need to add some tools which are required specifically for building Java applications. Experienced Java developers will know this, but for all other developers I’ll mention them here.

  • Java Development Kit (JDK): this contains the basic tools for compiling your source code
  • Maven: the tool that will actually execute our build

You can install both by typing “sudo apt-get install default-jdk maven” in a console window.

You’ll notice that you will need A LOT of dependencies. I’m sure happy that apt-get is there to take care of installing those! Just type “Y” to get it going.

If you’re in need of a cup of coffee, this is the time to go and get it… After a while, all dependencies will have been installed. Now, restart your build agent so that it will pick up on the newly installed tools and register them as “capabilities” for the build service.

When you go into the VSO control panel and look at the properties of your build agent, you’ll notice that the agent has picked up on the newly installed Maven binary and registered it as a capability for itself.

We now have all the prerequisites figured out. Let’s get building!

Creating the build definition

The user experience for the Build.vNext system is entirely web-based, so to create a Build.vNext build definition you’ll need to use the web interface. Fire up your browser and connect to your VSO account. Navigate to the team project which holds your Java code and navigate to the “Build” tab. Then, click on the green “+” sign to create a new build definition.

There are a couple of templates pre-defined by the VSO team for Visual Studio, Xamarin and Xcode builds. Since we’re not doing any of that, we’ll start with a clean slate and select the “Empty” option.

This will create a new build definition, which currently does not contain any build steps.

We’ll start by configuring the repository from which we want to build the code. Navigate to the “Repository” tab and select your repository type and your repository and branch. Additionally, I’ve specified “Clean = true” so that the working directory of the agent is cleaned before each build.

On the “General” tab, set the default queue that this build definition should use. Since I want to build on Linux, I’m specifying my “Linux” build queue. Optionally, you can specify a description and a build number format.

If you want, you can go ahead and change the retention policy on the “Retention” tab. For now, I’m not bothering with that. We’ll now go ahead and define the build process itself. For that, navigate to the “Build” tab and click “Add build step…”.

You’ll be presented with a screen in which you can select from the available build tasks. The list is already quite long, but will likely be expanded in the future. You’ll notice that there is already a task there to perform a Maven build. We’ll go ahead and add that to our build process.

In the configuration of the Maven build task, we need to specify our POM file. In the “Options” field, I have specified “-e” so that Maven will print out a stack trace if anything goes wrong. That’ll help in debugging. The default goal is “package”, which is fine in our case. We’ll also leave the “Publish to VSO/TFS” check box marked for the JUnit Test Results so that we’ll be able to analyze our unit test runs later.

Next, we’ll add another build step for publishing our built binaries to the server, so we can get them from there later.

We’ll configure the “Publish Artifact” task to copy everything in the “target” folder to the server in an artifact called “drop”. Finally, set the “Always Run” checkbox to on, so that artifacts like build logs are published even if a previous build step fails.

Finally, save your build definition and give it a useful name. Since build definitions are versioned in Build.vNext, you’ll also need to provide a comment which will appear in the history of the build definition.

That’s it! Our build definition is ready. Now we can run the build…

Running the build and viewing results

To queue a new build, click the little triangle before your build definition and click “Queue build…”.

Optionally, you can change the Queue, Branch or specify a commit id which you want to build. For now, just click “OK”.

The build will start running and after a while you should be seeing the green “Build Succeeded” message!

Did you notice that we never specified on which operating system our build is running? This is one of the cool features of Build.vNext! Since the tasks are written using Node.js they can execute on any platform. You can click the build number to navigate to the “Build summary” page.

The page will give you a basic overview of how the build went. Click the reload icon to load the test results. You can view details by clicking on the name of the run.

You’ll be able to view detailed results for the test run. Remember, this is Java code being tested by JUnit!


For getting the binaries that were produced by the build, go back to the build summary page, click “Artifacts” and then “Explore”.

You’ll be able to browse through the build output. This includes compiled classes, test reports and of course also the final .jar file. You can download a file by clicking the little triangle and then “Download”.

And there you have it! Java code, imported into VSO from Eclipse, built on a Linux build agent. While the application that I’ve built in this post in itself isn’t very useful, I do think it is a very nice demonstration of the direction that Microsoft is headed in: develop, build and run on many platforms.

Happy building!

Importing Java code into Git on Visual Studio Online from Eclipse

Last time I wrote about Creating a Build.vNext agent on Linux. I ended that post with the idea to build a Java application on Linux. The first thing that we’ll need is some Java code in VSO to build. Being a Java newbie, it took me quite a bit of figuring out on how to do that. I thought I’d share my experiences here, for you to enjoy.

Install Eclipse and Team Explorer Everywhere

The first thing you’ll need to do when working with Java is installing a Java IDE. You can download it from here. I chose to download the “Eclipse IDE for Java EE Developers”, since that already includes some things that are useful later (like Maven integration). Installation is as simple as unzipping the files to a folder of your choice. However, I did have a strange issue when using the default Windows unzipper, resulting in Eclipse not starting up. When using 7-Zip, all is fine. Once you get Eclipse running, you’ll see a nice empty interface.

Next we’ll install Team Explorer Everywhere. Go to “Help”, “Install New Software…”

Then click the “Add” button to add a repository.

Enter the details for the Microsoft Eclipse repository and click OK.

Next, make sure the Microsoft repository is selected, check the TFS plug-in and click Next.

After finishing the wizard the TFS plug-in will be installed and you’ll need to restart Eclipse. You can open the plug-in by clicking “Window”, “Show View”, “Team Explorer”.

Finally, click “Connect to Team Foundation Server” and select your VSO account and project.

Voilà, you are connected to VSO from Eclipse!


Upload code into repository

Before we can build something, we’ll need something to build, obviously. I’ve put a demo application on my GitHub, which you can download here. Unzip the file and put the “Demo/hellomaven” folder somewhere on your hard drive. In my case, I used “D:\Demo”.

In my VSO project, I have created an empty Git repository to hold the code. We can use Eclipse to push the code there. First, we’ll need to clone the repository. In Eclipse, go to Team Explorer and click “Git Repositories”. Then right-click your repository and click “Import repository”.

In the wizard, select your repository and click “Next”.

On the next page, select a local folder where you want to clone the repository. A subfolder with the name of your repository will automatically be created. Don’t forget to provide your credentials. If the account you used to log in to Windows is different from what you’re using for VSO, you’ll need to provide your alternate credentials here.

Since there is nothing in the repository yet, we can’t select an initial branch…

Next, the wizard will ask if you want to import or create Eclipse projects from this repository. Since I’ll import my demo project into this repository later, I’ll select “Import detected Eclipse projects”.

Since the Git repository is empty, there will be no projects to import. That’s OK, we’ll add a project later. Click “Finish” to start cloning the repository. It’ll appear as if nothing happens, but you will get an (empty) copy of the repository on your local disk.

Now we can import our code! First, we’ll open the project in Eclipse. To do that, right-click in the Package Explorer (“Window”, “Show View”, “Package Explorer” if it’s not visible) and select “Import…”.

Select “Existing Projects into Workspace” and click “Next”.

Browse to the directory where you unzipped the demo project (or your own code of course), make sure your project is checked and click “Finish”.

This will import your project into your workspace and make it visible in the “Package Explorer” window. From there, we can move it into our Git repository. Right-click the project and select “Team”, “Share project”.

Select “Git” as the repository type and click “Next”.

Select your repository in the dropdown list, make sure your project is checked and click “Finish” to move the project into the Git repository.

Now all we need to do is commit and push our code to VSO. To do that, right-click your project and choose “Team”, “Commit”.

Type a meaningful commit message (you always do this, right?), select all files and click “Commit and Push” to start pushing everything to VSO.

Since this is the first time we’re pushing, we’ll need to select a remote branch to push to. In this case, I’ll go with the default “master”.

Finally, click “Finish” to start the magic…

If everything went smoothly, you’ll get a result window without any errors.

You should also see your code in the “Code” tab of the VSO web interface.

So that’s it! You now have your Java code in a Git repository, hosted on Visual Studio Online! You can now start using Git from Eclipse, where all the usual Git stuff (commit, branch, merge, history, rebase, etc) is available under the “Team” submenu in the “Package Explorer”.

You can also use the “Git Staging” view, which will give you a nice and easy way to select the files that you want to commit. Just drag files from “Unstaged Changes” to “Staged Changes” to include them in your commit.


Hopefully this will get you up and running with Eclipse, Git and VSO quickly! My next post will be about using VSO Build.vNext to actually build the code.

Happy Gitting!