I recommend always having an AutoScaling group for your instances even if you do not want to automatically scale. Some advantages include:
- Aggregate CloudWatch metrics across your instances.
- Automatically register and deregister instances with one or more ELBs using LoadBalancerNames.
- Keeping instance tags consistent. If you set the PropagateAtLaunch property to true, each new instance is automatically tagged (see the sketch after this list).
- Easily pull out an instance (detach) for debugging or development.
- Group your servers for CodeDeploy and new instances will automatically be included.
- Automatic redundancy enforcement. Anytime an instance is deleted by you, a coworker, or magic fairies, it is replaced with a new one.
- Testing deployments and instance configuration. Just terminate an instance and wait for the new one to get created.
- Use as a deployment mechanism for immutable instance deployments.
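For reference, creating such a group with the AWS SDK for Go looks roughly like this - a minimal sketch, where the group, launch configuration, ELB, and subnet names are all invented, and where a CloudFormation resource could express the same properties:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	svc := autoscaling.New(session.Must(session.NewSession()))

	// Even with Min == Max == 1 you still get replacement, tagging,
	// and ELB registration for free.
	_, err := svc.CreateAutoScalingGroup(&autoscaling.CreateAutoScalingGroupInput{
		AutoScalingGroupName:    aws.String("my-service"),    // hypothetical name
		LaunchConfigurationName: aws.String("my-service-lc"), // hypothetical launch config
		MinSize:                 aws.Int64(1),
		MaxSize:                 aws.Int64(1),
		LoadBalancerNames:       []*string{aws.String("my-service-elb")},
		VPCZoneIdentifier:       aws.String("subnet-aaaa,subnet-bbbb"),
		Tags: []*autoscaling.Tag{{
			Key:               aws.String("Name"),
			Value:             aws.String("my-service"),
			PropagateAtLaunch: aws.Bool(true), // each new instance gets this tag
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}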
Peter Dalinis
Wednesday, November 11, 2015
Tuesday, July 14, 2015
Many git projects? Save time with mu-repo!
Just a quick time saving tip for when you are working with many interdependent git projects.
mu-repo will perform git operations against a number of projects at once. Think of it as a git command multiplexer.
Install with "pip install mu-repo", and verify it is in your path.
Next, set up a directory like ~/work, or ~/go/src/github.com/pdalinis, and clone down all the repositories.
From that parent directory, do a quick "mu register --all" to add all the child repositories to the .mu_repo file.
Now I can do "mu pull origin master" to update, and "mu st" to view a nice rollup status.
Adding and removing projects from the .mu_repo file is as easy as "mu register <repo>" and "mu unregister <repo>".
It even has excellent documentation when you run just "mu".
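Strung together, a first-time setup looks something like this (the repository name is just an example):

cd ~/work
git clone git@github.com:someorg/somerepo.git   # ...repeat for each project
mu register --all       # records every child repository in .mu_repo
mu pull origin master   # updates them all at once
mu st                   # rollup status across every repo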
Monday, July 13, 2015
Upstart Configuration Generator for Go
In the past I have talked about how great it is to use technologies that simplify everyone's lives.
One of my favorite productivity boosts was moving to Go because I was able to eliminate Chef. With Go, there are no dependencies to install, and almost nothing to configure on the target server.
One thing you will want is a program to supervise your Go program. If it crashes or the computer reboots, the supervisor will start it up again.
One really great program for this is Upstart. It is lightweight, easy to interact with, and is installed on most Linux distributions by default.
While you could use Chef, I feel it is overkill for this purpose. You could easily write a quick shell script to create it, or put it in your user data. None of these approaches is very optimal or DRY when you are shipping many services.
So I created a new package called upstartConfig. It generates an Upstart configuration file for the executing program. Just import the package, add a command-line flag to trigger it, and you'll have a nice new Upstart configuration file.
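To give a feel for what the generated file contains, here is a rough sketch using text/template - this is not the package's actual API, and the job name and stanzas are simplified:

package main

import (
	"os"
	"text/template"
)

// A stripped-down Upstart job: start on boot and respawn if the process dies.
// The real upstartConfig package may emit different stanzas and defaults.
const upstartJob = `description "{{.Name}}"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec {{.Path}}
`

func main() {
	exe, err := os.Executable() // path of the currently running binary
	if err != nil {
		panic(err)
	}

	// Hypothetical job name; a file under /etc/init is what Upstart picks up.
	f, err := os.Create("/etc/init/myservice.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	t := template.Must(template.New("upstart").Parse(upstartJob))
	t.Execute(f, struct{ Name, Path string }{"myservice", exe})
}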
Take it for a spin and let me know how it works!
Labels:
golang,
microservices,
upstart
Tuesday, June 16, 2015
Papertrail Logging Naming with AWS Autoscaling Groups
Off-box logging is vital to everyone's sanity and success when in AWS.
Using Papertrail has been painless, and the features are great! There are many different ways to integrate with them, and forwarding your syslog to them is a good first step.
The problem I ran into was that my logs were showing up by hostname. In AWS, a hostname of "ip-nnn-nnn-nnn-nnn" is not very clear. What application does that instance belong to? Is that instance part of staging or production? Add another complication - AutoScaling. Now when an instance comes online and goes away after a few hours, it gets even more difficult to understand what is occurring when you look at the logs.
In my UserData, in the CloudFormation Template, I just added these simple bash lines to change the hostname to something meaningful and dynamic:
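They amount to something along these lines, with MyApp and production standing in for the joined AppName and Environment parameter values:

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
NEW_HOSTNAME="MyApp-production/$INSTANCE_ID"
hostname "$NEW_HOSTNAME"
echo "$NEW_HOSTNAME" > /etc/hostname
service rsyslog restart   # so the syslog forwarder reports the new name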
The CloudFormation Template has two input parameters (AppName and Environment) and uses a Join to put the hostname variable together.
So now the hostname will be <AppName>-<Environment>/<InstanceID>.
Back in Papertrail, I can now group these systems using wildcards.
Labels:
AWS,
CloudFormation
Thursday, February 12, 2015
Golang with Atlassian Stash
Overall, I am unimpressed with Atlassian Stash. It is missing many of the features that are available on GitHub, is not open source, and generally feels about 3 years behind the curve.
Using it with go get, we ran into some strange issues both with Stash and go get.
Here are a few links that you can upvote to help Atlassian prioritize native go support:
https://answers.atlassian.com/questions/318264/does-stash-have-go-support
https://answers.atlassian.com/questions/329277/making-git-suffix-for-repository-urls-optional
In the meantime, we got things to kind of work, for the most part...
1. Use .git as a suffix on all your projects and imports. Yes, this makes everything ugly, but naming the repository logging.git lets go get know that the repo uses git (see the import example after this list). If you use gox, you can use the -output flag to name the build output without the .git extension.
2. Add the following redirect in your .gitconfig - it will force go get to use SSH and save a lot of time by not waiting for go get to time out:
[url "git+ssh://git.foo.net:"]3. Use the -f flag with -u "go get -v -u -f" to update your project.
insteadOf = https://git.foo.net/
insteadOf = http://git.foo.net/
insteadOf = git://git.foo.net
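As a made-up example of item 1, the .git suffix lives right in the import path (the package is still referred to as logging in code):

import "git.foo.net/scm/myteam/logging.git"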
I hope this helps!
Monday, February 2, 2015
All the things your microservices should do (Part 2)
Health, Version, and CLI are talked about in Part 1.
Logging
The services never write anything to disk. With immutable servers, every push replaces the instances, so anything written to disk is deleted along with them.
For CLI and local testing, the logger reverts to using stdout. For staging and production we ship our logs to CloudWatch Logging.
One great thing about this is that we can easily integrate monitoring with the log events to get notified when errors start coming in.
The last 10 log entries are also visible by viewing the health page.
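Shipping an event with the AWS SDK for Go looks roughly like this - a sketch where the group and stream names are invented, and where a real logger would batch events and carry the sequence token forward between calls:

package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

func main() {
	svc := cloudwatchlogs.New(session.Must(session.NewSession()))

	_, err := svc.PutLogEvents(&cloudwatchlogs.PutLogEventsInput{
		LogGroupName:  aws.String("my-service"),  // hypothetical group
		LogStreamName: aws.String("i-0abc12345"), // hypothetical stream
		LogEvents: []*cloudwatchlogs.InputLogEvent{{
			Message:   aws.String("something interesting happened"),
			Timestamp: aws.Int64(time.Now().UnixNano() / int64(time.Millisecond)),
		}},
		// After the first call, pass back the SequenceToken returned by the
		// previous PutLogEvents here.
	})
	if err != nil {
		log.Println("cloudwatch logs:", err) // fall back to stdout so nothing is lost
	}
}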
Metrics
The services post their metrics directly; these include memory, CPU, disk, and anything else you are interested in. Monitoring can then be set up to consume these metrics and alert you when things start moving in the wrong direction. We send all our metrics to CloudWatch Metrics. The last 10 metrics are also visible by viewing the health page.
You could use an agent to send logs and metrics, but that complicates the installation and configuration process, and introduces new technologies into what should be a very minimalistic stack.
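Posting a data point straight from the service is a single SDK call - a sketch, with an invented namespace and metric name:

package main

import (
	"log"
	"runtime"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
)

func main() {
	svc := cloudwatch.New(session.Must(session.NewSession()))

	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	// One data point, posted directly by the service - no agent required.
	_, err := svc.PutMetricData(&cloudwatch.PutMetricDataInput{
		Namespace: aws.String("MyService"), // hypothetical namespace
		MetricData: []*cloudwatch.MetricDatum{{
			MetricName: aws.String("HeapAllocBytes"),
			Value:      aws.Float64(float64(m.HeapAlloc)),
			Unit:       aws.String(cloudwatch.StandardUnitBytes),
		}},
	})
	if err != nil {
		log.Println("cloudwatch:", err)
	}
}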
Profiling
Memory leaks and performance bottlenecks happen. When they do, it is important to be able to find them quickly. All our services run the net/http/pprof package so that we can remotely troubleshoot any issues.
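Wiring pprof in is essentially a blank import plus a listener; a minimal sketch (the port is arbitrary):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// In a real service this runs alongside the normal listeners.
	log.Println(http.ListenAndServe("localhost:6060", nil))
}

From there, "go tool pprof http://localhost:6060/debug/pprof/heap" points at a live heap profile.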
Init
With golang, you do not have to install any dependencies or any programs. When our server boots up, it copies the single compiled file onto itself and runs it with the -init parameter. This is done using a user data script and S3 for storage.
Like version, it is a contract between our services and our continuous delivery system. For most of the services, the init command just generates an upstart configuration file, but it can be used to do many different things.
Configuration
We do not use configuration files; they are a pain to deal with, track, and get right. You'll be too tempted to just SSH into the server and change the file instead of going through your CI/CD process.
Our dependencies are detected from various places, such as our CloudFormation outputs and service locators. One example is the CloudWatch Log Stream, which is dynamically named by AWS. On start, the code looks up the values from the CloudFormation outputs and configures the logging system accordingly.
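Reading those outputs at startup is a couple of SDK calls - a sketch, with an invented stack name and output key:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	resp, err := svc.DescribeStacks(&cloudformation.DescribeStacksInput{
		StackName: aws.String("my-service-production"), // hypothetical stack
	})
	if err != nil || len(resp.Stacks) == 0 {
		log.Fatal("could not describe stack: ", err)
	}

	// Pull the dynamically named resource (or any other output) by key.
	for _, out := range resp.Stacks[0].Outputs {
		if aws.StringValue(out.OutputKey) == "LogGroupName" { // hypothetical output key
			fmt.Println("log group:", aws.StringValue(out.OutputValue))
		}
	}
}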
Others are just hard-coded. It takes about 3 mins to change a hard-coded value and push the change to production. Not a big deal, and well worth the benefits of having such a simple deployment process.
We use IAM roles and EC2 security groups for authentication and network connectivity and haven't needed to store any passwords (yet...)
Labels:
golang,
microservices
Wednesday, December 3, 2014
All the things your microservices should do (Part 1)
I like to think of a microservices environment as pretty ideal. You deliver small, concise pieces of functionality for others to pick and choose from. Your customers can take a look around and put together another service or application very quickly. You maintain a suite of services, and create new ones when others aren't used anymore.
When writing and maintaining many small services, there are some extra things you might want to consider to keep your life sane.
Imagine a scenario of owning 8-12 services in a small team of 3, which I do not think is unreasonable.
I have started doing just this, and have created some really useful and helpful pieces of functionality to make it maintainable.
In my next few blog posts, I'll talk about what it is I have implemented and how it makes the above scenario manageable.
Version
Every service I write has a GET endpoint that returns the version of the service (note: not the version of the contract - that is different). The version number consists of a standard, dev-maintained major.minor, followed by the Git SHA, and then a marker if there are any pending Git changes (this will make more sense in future posts when I talk about CD). So by looking at the /version endpoint, I might see a response of
{ "Version": "1.1.ab41cd" }
or if I have pending changes:
{ "Version": "1.1.ab41cd+CHANGES" }
When using Golang, you can set a variable's value at build time by passing a linker flag. I learned this versioning tip from reading the source of terraform.io.
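The mechanics are a package-level variable that the linker overwrites; a minimal sketch (the build line uses the current -X name=value form):

package main

import "fmt"

// Overwritten at build time, for example:
//   go build -ldflags "-X main.version=1.1.ab41cd"
var version = "0.0.dev"

func main() {
	fmt.Println(version)
}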
Command Line Interface
Since microservices are so small, it is nice to provide their functionality not only as a hosted service, but also as a command line program. All the functionality of the service also works with command line options, and one of the most important is showing the version (-v). By executing our service with the -v flag, it returns the version number, which can be used by the build/deployment system and by humans trying to troubleshoot issues.
The CLI can be used by both customers and automation, and it is especially great for those "beginning of time" problems. Our deployment process depends on a service, and without the CLI, we would have to do the first deployment manually.
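The -v handling itself is just a flag check before the service starts; a sketch, reusing the same version variable:

package main

import (
	"flag"
	"fmt"
	"os"
)

var version = "0.0.dev" // set via -ldflags as above

func main() {
	showVersion := flag.Bool("v", false, "print the version and exit")
	flag.Parse()

	if *showVersion {
		fmt.Println(version)
		os.Exit(0)
	}

	// ... otherwise start the HTTP listeners or run the requested CLI command ...
}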
Health
Health information is very important, as it is used by load balancers, deployment, monitoring, and as a troubleshooting tool. All in-process and external connections should be included in your check: databases, caches, other APIs, logging, metrics, etc. Your health check should be low overhead, never fail, and return quickly, as it will be called often. Health requests should include a nice quick roll-up status, along with details on each dependency.
I have an interface that my dependencies implement to return this information:
IsHealthy() (bool, error, []string)
In addition, I like my health response to include the last few error log entries, and metrics. This helps me troubleshoot any issues without having to look on the machine. It even shows me dynamic configuration values that are queried when the service starts in AWS. It is a huge helper for debugging issues.
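Here is a sketch of how the pieces can fit together - the IsHealthy signature is the one above; everything else (names, JSON shape) is invented:

package main

import (
	"encoding/json"
	"net/http"
)

// Each dependency (database, cache, downstream API, logger, ...) implements this.
type HealthChecker interface {
	IsHealthy() (bool, error, []string)
}

type depStatus struct {
	Healthy bool
	Error   string   `json:",omitempty"`
	Info    []string `json:",omitempty"`
}

// healthHandler rolls the individual dependency checks up into one response.
func healthHandler(deps map[string]HealthChecker) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		overall := true
		details := map[string]depStatus{}
		for name, d := range deps {
			ok, err, info := d.IsHealthy()
			s := depStatus{Healthy: ok, Info: info}
			if err != nil {
				s.Error = err.Error()
			}
			details[name] = s
			overall = overall && ok
		}
		if !overall {
			w.WriteHeader(http.StatusServiceUnavailable)
		}
		json.NewEncoder(w).Encode(struct {
			Healthy      bool
			Dependencies map[string]depStatus
		}{overall, details})
	}
}

func main() {
	// Register real dependencies here; an empty map still answers healthy.
	http.HandleFunc("/health", healthHandler(map[string]HealthChecker{}))
	http.ListenAndServe(":8080", nil)
}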
Labels:
golang,
microservices