Instance Pudding

Dan Buch,

[Image: Tanita pudding]

This is a very exciting time for infrastructure and automation at Travis CI. As you may have noticed in a recent blog post, we've begun moving some of our Linux build capacity to EC2. With this change come new challenges and opportunities, and we've been busy building up tooling around how we interact with the infrastructure.

the importance of chat

Before delving too much into what we've built, it's worth mentioning that we like to talk to each other. A lot. I'm personally very thankful for this as a remote employee, since it means I'm able to lurk in our various Slack channels or read through the backlog and have a decent idea of what's going on. In addition to conversations my (amazing!) teammates are having, I get to see activities from GitHub, Twitter, HelpScout, PagerDuty, Librato, and Papertrail, among others. This is all part of our belief in the power of chatops.

Those familiar with chatops will know that it's not all about having alerts and such coming into the stream of conversation. Executing tasks from chat instead of context switching to a console and reporting that you've run some commands is important both for visibility and knowledge sharing, not to mention being incredibly handy when all you've got is your phone.

In preparation for moving Linux builds onto the EC2 infrastructure, we decided that the existing tools for managing EC2 instances simply weren't going to cut it. The command line tools had to be cloned down and configured, and the docs and help system were next to nonexistent. Rather than making the command line tooling setup and usage more user-friendly, we decided to instead put effort into rebuilding the functionality in the form of an API.

the start of pudding

The first draft of what is now pudding started out as a port of the command line scripts we'd been using to manage our EC2 instances. The basic steps were to create a security group, create an instance with that security group, wait until the instance was running, then use an SSH connection to upload some last-mile configuration. In the process of porting to pudding, we ended up switching from waiting for SSH to using cloud-init to perform that configuration.
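
To make that workflow concrete, here is a minimal sketch of the provisioning flow using the aws-sdk-go library. It is not pudding's actual code; the region, AMI ID, instance type, security group name, and user-data script are all placeholders.

```go
package main

import (
	"encoding/base64"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")})))

	// Step 1: create a security group for the instance.
	sg, err := svc.CreateSecurityGroup(&ec2.CreateSecurityGroupInput{
		GroupName:   aws.String("travis-worker-example"), // placeholder name
		Description: aws.String("example worker security group"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: hand the last-mile configuration to cloud-init as user data
	// instead of uploading it over SSH after boot.
	userData := base64.StdEncoding.EncodeToString([]byte(
		"#!/bin/bash\n# hypothetical last-mile configuration\necho 'configure travis-worker here'\n"))

	// Step 3: launch the instance with the security group and user data.
	res, err := svc.RunInstances(&ec2.RunInstancesInput{
		ImageId:          aws.String("ami-00000000"), // placeholder AMI
		InstanceType:     aws.String("c3.xlarge"),    // placeholder type
		MinCount:         aws.Int64(1),
		MaxCount:         aws.Int64(1),
		SecurityGroupIds: []*string{sg.GroupId},
		UserData:         aws.String(userData),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("launched %s", aws.StringValue(res.Instances[0].InstanceId))
}
```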

As we built the pudding API, we were also building a hubot script, which has now been extracted into the hubot-pudding repo. Being able to get feedback on our EC2 estate in chat became addictive, and we started adding Slack integrations to pudding. In its current incarnation, starting an instance via pudding results in notifications at the initial creation request, at instance start, and at the completion of cloud-init running the user-data script.
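
For illustration, the Slack side of a notification like that can be as small as a POST to an incoming webhook. This is a hedged sketch rather than pudding's implementation; the webhook URL, instance ID, and message text are placeholders.

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

// notifySlack posts a plain-text message to a Slack incoming webhook.
func notifySlack(webhookURL, text string) error {
	payload, err := json.Marshal(map[string]string{"text": text})
	if err != nil {
		return err
	}
	resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	// Hypothetical notifications at each stage of an instance's life cycle.
	url := "https://hooks.slack.com/services/EXAMPLE/EXAMPLE/EXAMPLE"
	for _, msg := range []string{
		"instance i-0abc123 requested",
		"instance i-0abc123 is running",
		"instance i-0abc123 finished cloud-init",
	} {
		if err := notifySlack(url, msg); err != nil {
			log.Println("slack notification failed:", err)
		}
	}
}
```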

[Screenshot: hubot pudding usage]

what's next

It's been very handy to be able to manage our EC2 capacity via chat, but we know that we can do much better. For starters, the process of starting and terminating individual instances has not scaled well. We also have the problem of operating well under capacity for easily 75% of a given weekday, and all weekend long. The seemingly obvious solution to this is to use EC2 autoscaling groups. Unfortunately, the nature of our workload (running everyone's tests!) is not a great match for the default autoscaling mode of operation in which instances are terminated immediately when scaling in.

What we're working to add next is a concept of instance pools in pudding, borrowing concepts from autoscaling groups when it makes sense. By rolling our own solution (which we don't do lightly), we'll have the level of control we need to ensure no build jobs are killed mid-run when we're scaling in. We're also planning to add more commands to the hubot script for providing things like a summary of all EC2 resources and re-provisioning all instances in a pool with a newly-baked AMI.
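
As a rough illustration of the scale-in behavior we're after, the sketch below picks termination candidates from a pool only when they have no jobs running. The types and fields are hypothetical, not pudding's instance-pool implementation.

```go
package main

import "fmt"

// Instance is a hypothetical view of a pool member.
type Instance struct {
	ID          string
	RunningJobs int
}

// Pool is a hypothetical instance pool with a desired size.
type Pool struct {
	Desired   int
	Instances []Instance
}

// scaleInCandidates returns instances that can be terminated without
// killing a build mid-run: only idle instances, and only as many as
// needed to get back down to the desired pool size.
func scaleInCandidates(p Pool) []Instance {
	excess := len(p.Instances) - p.Desired
	if excess <= 0 {
		return nil
	}
	var candidates []Instance
	for _, inst := range p.Instances {
		if len(candidates) == excess {
			break
		}
		if inst.RunningJobs == 0 {
			candidates = append(candidates, inst)
		}
	}
	// Busy instances are left alone; they drain naturally and become
	// candidates on a later pass.
	return candidates
}

func main() {
	p := Pool{
		Desired: 2,
		Instances: []Instance{
			{ID: "i-01", RunningJobs: 0},
			{ID: "i-02", RunningJobs: 3},
			{ID: "i-03", RunningJobs: 0},
			{ID: "i-04", RunningJobs: 1},
		},
	}
	fmt.Println(scaleInCandidates(p)) // [{i-01 0} {i-03 0}]
}
```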

We're also planning to make the pudding codebase even less Travis-specific, as there are currently several bits in there that only exist because of how travis-worker is deployed and configured. This concern isn't only about shipping an open source project that may be useful to others; it's about taking care to prevent concept leakage, which so often results in unexpected behavior and maintenance issues (citation needed).

[Image: pudding Heroku button]

In the interest of being able to try out pudding for yourself, and so that we can easily re-provision it if needed, we've even added a Heroku button!

Give pudding a shot, or just look at the source code, and please provide feedback. Happy provisioning!

[Screenshot: terminating an instance with pudding]


Introducing Travis CI Enterprise

Mathias Meyer,

Today we're excited to ship and launch Travis CI Enterprise, a self-hosted version of Travis CI that runs inside your datacenter.

Over the past months and years, we've been approached many a time by companies wanting to run Travis CI on their own premises, utilizing their internal GitHub Enterprise installations.

Travis CI Enterprise supports GitHub.com and GitHub Enterprise, allowing you to bring all the features that make Travis CI great into your own datacenter, making it easy to scale out build infrastructure based on your company's needs.

Runs on your infrastructure

With Travis CI Enterprise, you can fully utilize your existing internal infrastructure, whether you're using OpenStack, VMware, or bare metal servers.

It's optimized to run on EC2 as well, and it's fully based on the new Docker build stack that we shipped earlier this week.

Running on EC2 allows you to scale out capacity and save costs based on demand throughout the day or week.

Customize the build environment to your needs

With Travis CI Enterprise, you can customize the build environment to reflect your specific needs. Thanks to Docker, you can provide your own images with whatever services and language versions you need installed by default, speeding up builds significantly and removing the need for customization (and the accompanying slowdown) during the build.

Security

Just like hosted Travis CI, our Enterprise version integrates with your existing GitHub instance. You can easily use it with LDAP directories, as login, authentication and authorization are strictly tied to the users you already have set up in GitHub Enterprise.

Pricing

Licensing is per seat, with every license including 20 users. Pricing starts at $6,000 per license, which covers 20 users and 5 concurrent builds; there's a premium option with unlimited builds for $8,500.

How can I try it out?

Send us an email, and we'll get you set up with a trial.

Questions and Answers

Does Travis CI Enterprise support Stash, Gitorious, or any other version control system?

Travis CI Enterprise focuses on a tight integration with GitHub and GitHub Enterprise. While we don't have immediate plans to support other platforms, do get in touch if you have specific needs, as that gives us clues for our product roadmap.

How is it installed?

We ship you a set of two images, one containing the base installation, and another one that includes the job worker, which you can install on the machines you want to dedicate to running builds.

Can I provide my own Docker image?

Yes, you can. Travis CI Enterprise fully supports bringing your own set of images so you can tailor the build environment to suit your needs.

Can I dedicate more resources to my Docker containers?

By default, the containers run with 2 cores and 4 GB of memory. While that can't be customized just yet, it's something we're looking into adding in the future.
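
For reference, those defaults correspond to the kind of resource limits you can set when creating a container against the Docker Engine API. The sketch below uses the current Docker Go client with a placeholder image and container name; it is not how Travis CI Enterprise actually configures its build containers.

```go
package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Limits equivalent to the defaults mentioned above: 2 cores, 4 GB RAM.
	hostConfig := &container.HostConfig{
		Resources: container.Resources{
			NanoCPUs: 2 * 1_000_000_000,      // 2 CPUs
			Memory:   4 * 1024 * 1024 * 1024, // 4 GiB
		},
	}

	resp, err := cli.ContainerCreate(ctx,
		&container.Config{Image: "example/build-env:latest"}, // placeholder image
		hostConfig, nil, nil, "example-build-container")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("created container", resp.ID)
}
```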

Want to try out Travis CI Enterprise on your infrastructure or on EC2? Get in touch, and we'll get you set up!


A pudding-loving Dan Buch joins the Travis CI Team

Josh Kalderimis,

We are very pleased to shout from the rooftops that Dan Buch, an avid Travis CI open source contributor, pudding lover, and connoisseur of hats made from meat, has joined the Travis CI team!

Dan hails from Pittsburgh, the city of a hundred bridges and home of the Primanti Bros. sandwich, which is filled with so much awesome, including a week's worth of daily pastrami intake.

In such a short amount of time he has already had a huge impact on everything Travis CI, especially with helping us make our infrastructure awesome, all with the power of chatops!

In fact, Dan's awesomeness transcends the augmented scaling power of the synergized cloud, which is why we are even more excited to announce that he will be our infrastructure lead!

If you are around the Pittsburgh area, please give Dan a hug and a high five; otherwise, send him a tweet or ten :)

Welcome to the team, Dan!