Usage Guidelines for HttpClient

The HttpClient class is used in modern .NET applications to make HTTP requests. It was introduced in .NET 4.5 as a replacement for HttpWebRequest and WebClient.

This post collects some usage guidelines for HttpClient that may not be obvious.

Reuse HttpClient instances

Per MSDN, “HttpClient is intended to be instantiated once and re-used throughout the life of an application.” The rationale is mentioned in that MSDN article and described in detail in this blog post.

All HttpClient methods for making HTTP requests are thread-safe, so as long as you don’t change any of its properties (BaseAddress, DefaultRequestHeaders, Timeout, etc.) after you start making requests, reusing the same instance on any thread will be fine.

The simplest way to reuse HttpClient instances is to create a static readonly field in the class that needs to make HTTP requests. Of course, that still results in one instance per class, so consider allowing the HttpClient instance to be specified via constructor or method parameter.
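A minimal sketch of that pattern (the class name, endpoint, and optional-parameter approach here are illustrative, not taken from any particular library):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class WidgetApi
{
    // One shared instance for the life of the application; never disposed.
    private static readonly HttpClient s_httpClient = new HttpClient();

    // Callers may supply their own HttpClient (e.g. for testing);
    // otherwise the shared static instance is used.
    public static Task<string> GetWidgetJsonAsync(string url, HttpClient httpClient = null)
    {
        return (httpClient ?? s_httpClient).GetStringAsync(url);
    }
}
```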

HttpClient is disposable, but there’s no harm in never disposing it if it is used for the life of the application.

Example: The HttpClientService class of Facility.Core accepts an HttpClient instance in the constructor, but if it isn’t specified, it uses a static readonly instance that is never disposed.

Dispose HttpRequestMessage and HttpResponseMessage

The SendAsync method is the most flexible way to send an HTTP request with HttpClient. It accepts an HttpRequestMessage and returns an HttpResponseMessage. Note that both classes are disposable; be sure to dispose of instances of both classes when you are finished with them.
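A hypothetical helper that does both (the method and its names are mine, not from FacilityCSharp):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class JsonClient
{
    // POSTs a JSON body and returns the JSON response, disposing both
    // the request and the response when done.
    public static async Task<string> PostJsonAsync(HttpClient httpClient, string url, string json)
    {
        using (var request = new HttpRequestMessage(HttpMethod.Post, url))
        {
            request.Content = new StringContent(json, Encoding.UTF8, "application/json");
            using (var response = await httpClient.SendAsync(request).ConfigureAwait(false))
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
            }
        }
    }
}
```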

Example: FacilityCSharp has tests that run a number of HTTP requests in a row that POST a JSON body and process a JSON result. After a few dozen requests, HttpClient would asynchronously deadlock. I never got to the bottom of it, but I did find a solution: dispose the HTTP requests and responses.

Handle timeouts properly

When an HTTP request times out, an OperationCanceledException is thrown, not a TimeoutException.

The default timeout is 100 seconds. If you are using a single HttpClient instance (see above), you’ll want to make sure that HttpClient.Timeout is set as high as any request might need.

To use a shorter timeout for certain requests, call CancellationTokenSource.CancelAfter and pass the corresponding cancellation token when sending the request. If you already have a cancellation token, use CancellationTokenSource.CreateLinkedTokenSource and call CancelAfter on the linked source. If you need to distinguish a timeout from an ordinary cancellation, check whether the original cancellation token is canceled.

Example: This StackOverflow answer demonstrates combining a cancellation token and a timeout.
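Here's a hypothetical helper along those lines. Note that it surfaces the timeout as a TimeoutException; that's a design choice on my part, not framework behavior:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class TimeoutHelper
{
    // Applies a per-request timeout on top of the caller's cancellation token,
    // and distinguishes a timeout from an ordinary cancellation.
    public static async Task<string> GetStringWithTimeoutAsync(
        HttpClient httpClient, string url, TimeSpan timeout, CancellationToken cancellationToken)
    {
        using (var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken))
        {
            cts.CancelAfter(timeout);
            try
            {
                using (var request = new HttpRequestMessage(HttpMethod.Get, url))
                using (var response = await httpClient.SendAsync(request, cts.Token).ConfigureAwait(false))
                {
                    response.EnsureSuccessStatusCode();
                    return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
                }
            }
            catch (OperationCanceledException) when (!cancellationToken.IsCancellationRequested)
            {
                // Only the linked (timeout) token fired, so this was a timeout,
                // not a cancellation requested by the caller.
                throw new TimeoutException("The request timed out after " + timeout + ".");
            }
        }
    }
}
```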

Respect DNS changes

I haven’t attempted to reproduce this behavior, but apparently using a singleton HttpClient doesn’t respect DNS changes. For more information, read this article and this article.

Apparently you can use ServicePoint.ConnectionLeaseTimeout to work around this problem (see linked articles).
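If you want to try it, the workaround looks something like this (the one-minute lease is an arbitrary choice, and the setting applies per endpoint):

```csharp
using System;
using System.Net;

public static class DnsWorkaround
{
    // Limits how long a connection to this endpoint may be kept alive and
    // reused, so that DNS changes are eventually picked up. The value is
    // in milliseconds; the default (-1) means connections are reused forever.
    public static void LimitConnectionLease(Uri baseUri)
    {
        ServicePointManager.FindServicePoint(baseUri).ConnectionLeaseTimeout =
            (int) TimeSpan.FromMinutes(1).TotalMilliseconds;
    }
}
```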

Aside: FacilityCSharp does not address this problem, but if you’re a Facility user and experience it, please file an issue and let us know!

Posted by Ed Ball on March 30, 2017

Getting Hired at Faithlife

Interviewing is hard. It is difficult to measure the whole of a person’s experience, expertise, and potential in such a short period of time. As hard as it is for me, I know that it is even harder for you. I do my best to make everyone comfortable, but it is hard to help people get past the nervousness. I think it will help if you are prepared. So, in an effort to help you do your best, I’ve prepared a few notes on getting hired at Faithlife.

Your Interviewers

First, a little bit about your interviewers. We are all engineers actively contributing code at Faithlife. We all volunteered to spend time with you and we all want you to do well. If there is something we can do to make you more comfortable during your interview, let us know.

How to Apply

The Minimum

Send us your GitHub, LinkedIn, StackOverflow, or a quick note. If what you send establishes that you meet the requirements, you will hear from a recruiter.

Make an Impression

Make a good impression. A good impression may be enough to get you a phone interview even if you are a little short of the requirements on paper, or to carry you through a misstep in a subsequent interview.

Before you do anything else, do some research on Faithlife. Be able to answer this question: “Why do you want to work at Faithlife?” You will be asked this question in at least one of the interviews, and likely in all of them. Write a cover letter (which answers the aforementioned question) and attach a resume. If you want to stand out when I evaluate your application:

  • Be interesting. A well written cover letter will make your application stand out. Write something that highlights your personality. Help me get to know you.
  • Contribute to one of the Faithlife open source projects.
  • Contribute to someone else’s open source project.
  • Do something else interesting. When I applied, I spent about an hour putting together a simple Bible search sample program using Lucene and C#. It was very basic, but demonstrated that I could write functioning code and knew something about Faithlife’s products.

Phone Interview

You got a response and your first interview has been scheduled. Great job! We are going to sit down for 30-45 minutes and chat. My goal will be to evaluate whether you have a basic understanding of data structures and proficiency with your preferred language. I will ask some questions and you will write some very basic code. FYI, when I ask, “What is your preferred language?”, your choice will not have any impact on your outcome; it just tells me which questions I should ask. Choose the language you want to answer questions about. Your preparation should include:

  • Practice answering interview questions on data structures and your preferred language.
  • Practice writing code on one of the many practice sites. Pick your favorite, but here are a couple examples:
  • What is important to you about your working environment? The people you will be working with? The projects and products you are working on? When you have the opportunity, ask good questions.


We are experimenting with replacing a portion of the in-person coding with a homework project. If you are selected to participate, you will be assigned a small project to complete on your own time. Pay attention to style. Make sure it compiles and functions as intended. Take some notes. Be prepared for an in-person code review with questions focusing on decisions you made, implementation details, etc. If you have time, do the bonus work.

Panel Interview

Nailed it. Your panel interview has been scheduled. You will be meeting with 3 engineers for about an hour and a half. We will discuss your past work and you will write a lot more code. Remember, your interviewers want you to do well. We are all pulling for you. Your preparations should include:

  • Review any recent work that you’re proud of and be ready to discuss it in detail. Demonstrate your passion for these projects and what you learned from each.
  • More practice writing code on one of the aforementioned practice sites.
  • Practice testing your code. Pay attention to base and edge cases.
  • Practice talking through the code. Demonstrate that you understand what it’s doing and explain why. An incomplete solution that is well defined with a plan explained is usually better than a complete solution that is not articulated clearly.
  • What is important to you about your working environment? The people you will be working with? The projects and products you are working on? When you have the opportunity, ask good questions. Sound familiar? Don’t be afraid to ask the same questions. Get some different perspectives.

Pair-programming Interview

Congrats! The process is selective and you are among a very small number of candidates who make it to the pairing interview. You will be spending a day with us at either the Tempe or Bellingham office. What to expect:

  • When you arrive someone will show you around our campus and tell you about the different departments. Grab some coffee, or if you agree with me and think coffee is foul, grab a soda.
  • The interview portion will involve 2 pairing sessions, each lasting around 2 hours. You and your partner will be working on a problem, either real or from a set of interview pairing projects. We are evaluating your ability to communicate, collaborate, and ship working code. Your preparations should include:
    • Become familiar with one of the languages we use during this interview: C#, JavaScript, Objective-C, C++, or Java. You will be evaluated based on your experience, but building a basic understanding before you arrive will make you productive faster.
    • Before you get started, talk through the problem with your partner. Ask questions and build a plan together. We are evaluating your ability to communicate and collaborate as much as your ability to be productive.
    • Plan how you will test.
    • You may not finish the project. That is ok. Use the time you have to demonstrate that you can solve problems with code and collaborate productively.
  • If you are with us on a Thursday, you will have an opportunity to attend “Demo day.” No need to prepare a demo. All of us (the engineers) and a few folks from the management and executive teams meet weekly to show off what we are working on and learning.
  • After demos, you will head out to lunch with a few engineers. This is a good time to ask questions about the culture at Faithlife and get to know some of your potential coworkers. Plan what you want to talk about; show passion and interest here.
  • Ask the good questions you prepared again.

Interviews with the Development Manager and the CEO

These may be held the same day you are on-site or at a later date, depending on availability. I haven’t performed any of these interviews, so I don’t have much advice. Ask good questions. Research the people who are interviewing you. Bob (our CEO) has written many helpful articles, like this one. The research will give you insight into how Faithlife runs and will likely inspire a few great questions. Good luck!

The Aftermath

If it went well, you have a start date! Congrats!

Connect with your interviewers and the other folks you met at Faithlife. Introduce yourself and ask for advice on preparing for your first day. If you have trouble finding those folks, connect with me. I would love to chat with you about your experience interviewing and answer any questions you have about working here. During your first few weeks at Faithlife, take some time to reach out and invite your interviewers to lunch (if I was one of them, I’ll buy). Ask them what you could have done better. Tell us how we can improve. This is the only time we can give constructive, personal feedback about interviews. Any constructive feedback you have in return is greatly appreciated. We all want to get better. Embrace openness (core value!) and help us get there.

If it wasn’t a great fit…

Interviewing is hard and we don’t want to take a chance on a bad placement. If we aren’t a confident “Yes!” the answer is “No.” We pass on some possibly great candidates because there just wasn’t enough evidence in the process to convince us. We value growth (core value!), so we are working on improving our hiring processes. If you have any feedback about your experience, we would love to hear it. These are the two most common reasons we have to pass on people and what to do about them:

  • Candidate hasn’t been pursuing mastery of their craft. They didn’t show evidence of enthusiasm for software engineering. If this is you:
    • Start reading. This is a good list to get you started.
    • Learn the ins and outs of the languages you are using day-to-day. You should have a firm grasp of how those technologies work, why they work that way, and where those technologies start to break down.
    • We move fast. Constant growth is an expectation. If you aren’t already actively pursuing knowledge and improving your technical skills, start.
  • Candidate doesn’t know their own weaknesses or limits. If this is you:
    • Growth (core value!) requires constant self-evaluation and the pursuit of actionable, critical feedback. If you aren’t aware of your shortcomings, you may not be doing either of these; start.

Keep pushing. Give it a few months or a year, then give it another shot. There isn’t any rule against re-applying. The next time you apply, tell us about all of the things you have done in the interim to improve. Recovering and learning from failure shows maturity, which is an important part of being an effective engineer.


Remember, it is safe to assume goodwill. We want you to do your best. We know it is a difficult situation and we want to make it as comfortable as possible. Take your time, take a deep breath, and take us up on our offer of a beverage. If you need it, take a break. Best of luck!

Thank you Auresa Nyctea, Dave Dunkin, Leigh VanderWoude, Patrick Nausha, Todd White, and all of the other folks that helped put this together.

Posted by Jared Wood on February 03, 2017

Async and Await in WPF

In our last video, we looked at using the async and await keywords in a console application. This week’s video uses async and await in a WPF application.

First we create a simple event handler using async and await. Then we simulate what happens behind-the-scenes with await by implementing the same behavior using continuation tasks. Just like await, we capture the SynchronizationContext and use the Post method to run the continuation on the UI thread.
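A rough sketch of that simulation (the method names are mine, not from the video). When there is no SynchronizationContext, as in a console app, the continuation simply runs on the thread-pool thread:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class AwaitSimulation
{
    // Roughly what "var result = await task; updateUi(result);" expands to:
    // capture the current SynchronizationContext, then Post the continuation
    // back to it so that updateUi runs on the UI thread.
    public static void RunThenUpdateUi(Task<string> task, Action<string> updateUi)
    {
        var context = SynchronizationContext.Current;
        task.ContinueWith(t =>
        {
            if (context != null)
                context.Post(_ => updateUi(t.Result), null);
            else
                updateUi(t.Result);
        });
    }
}
```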

Next we use DBpedia’s SPARQL endpoint to asynchronously execute a query against its structured data from Wikipedia. We then see what happens when an exception is thrown in an awaited task.

Stephen Toub has an excellent three-part article on await, SynchronizationContext, and console apps.

Posted by Scott Fleischman on June 03, 2013

TAP Using Tasks and Async/Await

With our two previous videos on Starting Asynchronous Work Using Tasks and Continuation Tasks, we are in an excellent position to use the new async and await keywords in C# 5.

This week’s video converts a simple synchronous method to async following the Task-based Asynchronous Pattern (TAP) using two different implementations.

  1. The first implementation uses Tasks, continuations and Task.Delay
  2. The second uses the new async and await keywords, resulting in code that is very similar to the synchronous version. It also uses the Task.WhenAll method to asynchronously wait on multiple tasks.
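A condensed sketch of the two approaches (the DelayedDouble methods are illustrative, not the video’s actual example):

```csharp
using System;
using System.Threading.Tasks;

public static class TapExamples
{
    // Implementation 1: Tasks, continuations, and Task.Delay; no async/await.
    public static Task<int> DelayedDoubleAsync(int value)
    {
        return Task.Delay(TimeSpan.FromMilliseconds(10))
            .ContinueWith(_ => value * 2);
    }

    // Implementation 2: async/await; reads almost like the synchronous version.
    public static async Task<int> DelayedDoubleAwaitAsync(int value)
    {
        await Task.Delay(TimeSpan.FromMilliseconds(10));
        return value * 2;
    }

    // Task.WhenAll asynchronously waits on multiple tasks.
    public static async Task<int> SumOfDoublesAsync(params int[] values)
    {
        var tasks = Array.ConvertAll(values, DelayedDoubleAwaitAsync);
        int[] results = await Task.WhenAll(tasks);
        int sum = 0;
        foreach (int r in results)
            sum += r;
        return sum;
    }
}
```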

For further reading, see:

Posted by Scott Fleischman on May 23, 2013

Continuation Tasks

Last week I posted a video on Starting Asynchronous Work Using Tasks. This week’s video is on Continuation Tasks. Continuation tasks allow you to control the flow of asynchronous operations. They are especially useful for passing data between asynchronous operations. Continuation tasks are normally created using the Task.ContinueWith method. They can also be created using methods like TaskFactory.ContinueWhenAll.
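For example (the method names here are mine):

```csharp
using System.Threading.Tasks;

public static class ContinuationExamples
{
    // A continuation receives the antecedent task and can read its Result,
    // which makes passing data between asynchronous operations easy.
    public static Task<int> ParseThenSquareAsync(string text)
    {
        return Task.Run(() => int.Parse(text))
            .ContinueWith(antecedent => antecedent.Result * antecedent.Result);
    }

    // ContinueWhenAll runs one continuation after several tasks complete.
    public static Task<int> SumWhenAllAsync(Task<int>[] tasks)
    {
        return Task.Factory.ContinueWhenAll(tasks, completed =>
        {
            int sum = 0;
            foreach (var t in completed)
                sum += t.Result;
            return sum;
        });
    }
}
```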

Posted by Scott Fleischman on May 17, 2013

Using native DLLs from ASP.NET apps

By default, ASP.NET uses shadow copying, which “enables assemblies that are used in an application domain to be updated without unloading the application domain.” Basically, it copies your assemblies to a temporary folder and runs your web app from there to avoid locking the assemblies, which would prevent you from updating those assemblies.

This works great for managed DLLs, but not very well for native DLLs. ASP.NET doesn’t copy the native DLLs, so the web app fails with an error that doesn’t even make it obvious that a native DLL is missing or which one it is.

The simplest solution I’ve found is to turn off shadow copying, which causes the DLLs to be loaded directly from the bin directory. This strategy is now being used in production. Just add a <hostingEnvironment> element to your web.config:

    <hostingEnvironment shadowCopyBinAssemblies="false" />

This also works for local development, but you may find that you need to restart the app pool in order to rebuild the app. Biblia has a pre-build step that runs appcmd stop apppool and a post-build step that runs appcmd start apppool, each with the argument naming the corresponding app pool.
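For reference, the build events look something like this; the app pool name is illustrative, and you may need to adjust the path to appcmd.exe. (This is Visual Studio build-event configuration, so it runs under cmd.exe on the build machine.)

```
rem Pre-build event: stop the app pool so the build can overwrite bin
%windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"MyWebApp"

rem Post-build event: start the app pool again
%windir%\system32\inetsrv\appcmd start apppool /apppool.name:"MyWebApp"
```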

Alternatively you could consider removing <hostingEnvironment> from your local web.config and putting your bin folder in the PATH system environment variable, but that will be problematic if you have multiple web apps that depend on different builds of the native DLLs.

Posted by Ed Ball on May 14, 2013

Starting Asynchronous Work Using Tasks

As multi-core processors are quickly becoming ubiquitous, it becomes increasingly important to use parallel and asynchronous programming techniques to create responsive, high-performance applications. The latest .NET releases have responded to this need by introducing the Task Parallel Library (TPL) in .NET 4, and the async/await keywords in C# 5.

We have created a set of fast-paced, code-driven videos on asynchronous programming in C# using TPL and async/await, with a focus on the Task-based Asynchronous Pattern (TAP). If you want a concise introduction to Tasks and async/await, these videos are for you! The videos are under 5 minutes each, and are intended to give a quick overview of each subject. The accompanying blog posts have links for further study.

This first video shows how to start asynchronous work using the Task.Run method, which returns a Task or Task<TResult>. The video also shows how to create tasks that are not run on any thread using TaskCompletionSource<TResult>.
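A quick sketch of both (the names are mine, not the video’s):

```csharp
using System.Threading.Tasks;

public static class StartingTasks
{
    // Task.Run queues work to the thread pool and returns a Task<TResult>.
    public static Task<int> ComputeSquareAsync(int n)
    {
        return Task.Run(() => n * n);
    }

    // TaskCompletionSource creates a task that is not run on any thread;
    // some other code completes it later (e.g. from an event handler).
    public static Task<string> WaitForSignal(out TaskCompletionSource<string> signal)
    {
        signal = new TaskCompletionSource<string>();
        return signal.Task;
    }
}
```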



For further reading, see:

Next week’s video: Continuation Tasks.

Posted by Scott Fleischman on May 10, 2013

Building Code at Logos: Build Repositories

As mentioned in my last post (Sharing Code Across Projects), developers work on the head of the master branch “by convention”. This is fine for day-to-day work, but we’d like something a little more rigorous for our continuous integration builds.

For this, we use “build repositories”. A build repository contains a submodule for each repository required to build the project. (In the App1 example, the App1 build repo would have App1, Framework and Utility submodules.) The CI server simply gets the most recent commit on the master branch of the build repo, recursively updates all the submodules, then builds the code.
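Here’s the CI checkout flow, simulated with throwaway local repositories so it can be run anywhere (all names are illustrative; a real build repo would live on a git server):

```shell
# identity for the throwaway commits
export GIT_AUTHOR_NAME=ci GIT_AUTHOR_EMAIL=ci@example.com
export GIT_COMMITTER_NAME=ci GIT_COMMITTER_EMAIL=ci@example.com
cd "$(mktemp -d)"

# a project repo that will become a submodule of the build repo
git init -q Utility
(cd Utility && git commit -q --allow-empty -m "init")

# the build repo: one submodule per repository needed for the build
git init -q App1-build
(cd App1-build &&
 git -c protocol.file.allow=always submodule add -q ../Utility Utility &&
 git commit -q -m "Add Utility submodule")

# what the CI server does for each build: clone the build repo, then
# recursively update submodules to the exact commits it records
git clone -q App1-build ci-checkout
cd ci-checkout
git -c protocol.file.allow=always submodule update --init --recursive
```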

The problem now: how is the build repository updated? We solve this using a tool we developed named Leeroy. (So named because we use Jenkins as a CI server, and Leeroy starts the Jenkins builds. We weren’t the first ones to think of this.)

Leeroy uses the GitHub API on our GitHub Enterprise instance to watch for changes to the submodules in a build repo. When it detects one, it creates a new commit (again, through the GitHub API) that updates that submodule in the build repo. After committing, it requests the “force build” URL on the Jenkins server to start a build. Jenkins’ standard git plugin updates the code to the current commit in each submodule and builds it.

The benefit is that we now have a permanent record of the code included in each build (by finding the commit in the build repo for that build, then following the submodules). For significant public releases, we also tag the build repo and each of the submodules (for convenience).

We’ve made Leeroy available at GitHub.

Posts in the “Building Code at Logos” series:

Posted by Bradley Grainger on November 19, 2012

Building Code at Logos: Sharing Code Across Projects

We often have common code that we’d like to share across different projects. (For example, our Utility library is useful in both the desktop software and in an ASP.NET website.)

One way of sharing code is to place it in its own repository, and add it as a submodule to all repos that need it. But submodules are a bit of a pain to work with on a daily basis (for example, git checkout doesn’t automatically update submodules when switching branches; you have to remember to do this every time, or create an alias).

Submodules also make it difficult to “compose” libraries. For example, App1 and App2 might both use Utility, but they might also both use Framework, a desktop application framework that’s not general-purpose enough to live in Utility, but is in its own repo. If Framework itself uses Utility as a submodule, then the App1 and App2 repos might contain both /ext/Utility and /ext/Framework/ext/Utility. This is a maintenance nightmare.

Our choice at Logos is to clone all necessary repositories as siblings of each other. In the App1 example above, we might have C:\Code\App1, C:\Code\Framework and C:\Code\Utility as independent repos. Dependencies are expressed as relative paths that reference files outside the current repo, e.g., ..\..\..\Utility\src\Utility.csproj. We’ve written a shell script that clones all necessary repos (for a new developer) or updates all subfolders of C:\Code (to get the latest code).
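The clone-or-update script might look something like this, again simulated with local repositories so it runs anywhere (the repo names and source location are illustrative):

```shell
# identity for the throwaway commits
export GIT_AUTHOR_NAME=dev GIT_AUTHOR_EMAIL=dev@example.com
export GIT_COMMITTER_NAME=dev GIT_COMMITTER_EMAIL=dev@example.com
root="$(mktemp -d)"

# stand-ins for the repos hosted on the git server
for repo in Utility Framework App1; do
  git init -q "$root/src/$repo"
  (cd "$root/src/$repo" && git commit -q --allow-empty -m "init")
done

# the script: clone each missing repo as a sibling, or pull if it exists
mkdir -p "$root/Code"
cd "$root/Code"
for repo in Utility Framework App1; do
  if [ -d "$repo" ]; then
    (cd "$repo" && git pull -q)
  else
    git clone -q "$root/src/$repo"
  fi
done
```

Running the script a second time takes the `git pull` branch for every repo, so one command serves both new developers and daily updates.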

By convention, developers are working on the master branch on each repo (or possibly a feature branch in one or more repos for a complex feature). It’s theoretically possible for someone to push a breaking change to Utility and forget to push the corresponding change to App1 (a problem that submodules do prevent), but this happens very infrequently.

Posts in the “Building Code at Logos” series:

Posted by Bradley Grainger on November 17, 2012

Building Code at Logos: Third-Party Repositories

Some of our repositories reference third-party code. In many cases, this can be managed using NuGet, but sometimes we need to make private modifications and build from source.

We accomplish this by creating a repository for the third-party code. In some cases, this repository is added as a submodule under ext in the repositories that need it; in other cases, the binaries created from the code are committed to another repository’s lib folder. The decision depends on how complicated it is to build the code versus how useful it is for developers to have the source (and not just precompiled binaries).
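Both options look something like this, simulated with local repositories (the project and file names are illustrative):

```shell
# identity for the throwaway commits
export GIT_AUTHOR_NAME=dev GIT_AUTHOR_EMAIL=dev@example.com
export GIT_COMMITTER_NAME=dev GIT_COMMITTER_EMAIL=dev@example.com
cd "$(mktemp -d)"

# a stand-in for the third-party repo (here with a prebuilt binary in it)
git init -q ExampleProject
(cd ExampleProject &&
 echo "binary" > ExampleProject.dll &&
 git add ExampleProject.dll &&
 git commit -q -m "Add Example 1.0")

# the consuming repo
git init -q App1
cd App1
git commit -q --allow-empty -m "init"

# Option 1: add the repo as a submodule under ext, when developers
# benefit from having the source
git -c protocol.file.allow=always submodule add -q ../ExampleProject ext/ExampleProject
git commit -q -m "Add ExampleProject submodule."

# Option 2: commit the prebuilt binaries to a lib folder, when the
# third-party code is complicated to build
mkdir -p lib
cp ../ExampleProject/ExampleProject.dll lib/
git add lib/ExampleProject.dll
git commit -q -m "Add ExampleProject 1.0 binaries."
```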

In the third-party repository, the upstream branch contains the unmodified upstream code, while the master branch contains the Logos-specific modifications.

When cloning the repository locally, the origin remote refers to our repository containing the third-party code. If the original third-party code is available via git, then we add an upstream remote that references the original maintainer’s code.

Example: Creating a ThirdParty repo from source on GitHub

# clone a local copy of the remote third-party repository
git clone
cd ExampleProject

# rename the "origin" remote (created by clone) to "upstream"
git remote rename origin upstream

# add Logos' repo as the "origin" remote
git remote add origin git@git:ThirdParty/ExampleProject.git

# use this code as the "upstream" branch
git checkout -b upstream
git push origin upstream

# work on Logos-specific modifications
git checkout master

** make modifications

git commit -am "Some important changes."

# push the changes to our repo
git push origin master

Example: Creating a ThirdParty repo from source in Subversion

# create the git repo
mkdir ExampleProject
cd ExampleProject
git init

# add Logos' repo as the "origin" remote
git remote add origin git@git:ThirdParty/ExampleProject.git

# seed it with the upstream code
svn export --force .

# add all the code
git add -A
git commit -m "Add Example 1.0"

# use this code as the "upstream" branch
git checkout -b upstream
git push origin upstream

# work on Logos-specific modifications
git checkout master

** make modifications

git commit -am "Some important changes."

# push the changes to our repo
git push origin master

Once the repository is created, we will want to update it with new versions of the third-party code when they are released (then merge in our changes).

The new code gets committed to the upstream branch, then that gets merged into master. If necessary, conflicts are resolved, or our changes are edited/removed to reflect changes in the upstream code.

Example: Updating a ThirdParty repository from source on GitHub

# switch to the "upstream" branch, which contains the latest external code
git checkout upstream

# get the latest code from the "master" branch in the "upstream" repo
git pull upstream master

# switch to our local "master" branch, which contains Logos changes
git checkout master

# merge in the latest upstream code
git merge upstream

** fix any conflicts, and commit if necessary

# push the latest merged code to our repo
git push origin master

Example: Updating a ThirdParty repository from source in Subversion

# switch to the "upstream" branch, which contains the latest external code
git checkout upstream

** delete all files in the working copy, except the '.git' directory

# get the latest version of the third-party code
svn export --force .

# add all files in the working copy, then commit them
git add -A
git commit -m "Update to Example 1.1."

# switch to our local "master" branch, which contains Logos changes
git checkout master

# merge in the latest upstream code
git merge upstream

** fix any conflicts, and commit if necessary

# push the latest merged code to our repo
git push origin master

Posts in the “Building Code at Logos” series:

Posted by Bradley Grainger on November 16, 2012