Presentation: Testing Programmable Infrastructure with Ruby


Duration: 11:50am - 12:40pm

Key Takeaways

  • Understand that you can - and probably should - test your programmable infrastructure!
  • Learn about the benefits of testing your programmable infrastructure - tighter feedback cycle, fewer bugs, de-risking your deployments
  • Understand DevOps requires taking testing seriously

Abstract

With the rise of DevOps, programmable infrastructure is reaching widespread adoption. However, although automated testing of software is becoming ever more common, the same cannot be said of testing the target deployment environment itself. With microservices making our deployments more and more complex, we can no longer afford to ignore this type of testing. This talk will take a tour through some approaches to environment infrastructure testing that we have created using Ruby, a language we feel is uniquely positioned to work with both infrastructure and testing.

Interview

Question: 
What is the focus of your work today?
Answer: 

I work as a QA consultant for OpenCredo, which broadly means I deal with all elements of QA with our client engagements. More specifically, I manage the automated testing of highly complex systems - microservices, highly scalable applications, automated infrastructure, big data, you name it. OpenCredo are often well ahead of the curve in adopting new technology, which means my job testing it is very interesting!

Question: 
What’s the motivation for your talk?
Answer: 

For the past 9 months, I’ve been working with a client building a cloud brokerage system - one that abstracts away the provisioning of infrastructure from our client, and allows them to swap between AWS, Azure, etc., quite easily. We’re using a modern stack of DevOps tools such as Terraform, Ansible, and Vault to create and maintain these environments, so we’re in a great position from a dev side. It raised some interesting questions from a functional test perspective, though - if our system’s function was to create and configure infrastructure, well, how do we test all of that?

Our solution was to create a separate functional test suite of infrastructure tests, alongside all our other testing frameworks. It worked pretty well, but we noticed in the meantime that this seems to be a pretty rare thing to do. Which is odd, because our infrastructure is code - why on earth aren’t we also testing it? It seems that potentially we’re developing a testing blind spot.
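The kind of infrastructure test described above can be sketched in plain Ruby. This is a minimal, hypothetical example - the `port_open?` helper is an assumption for illustration, not the talk's actual code, and a local `TCPServer` stands in for freshly provisioned infrastructure - showing the basic shape of asserting that a provisioned service is actually reachable:

```ruby
require "socket"
require "timeout"

# Hypothetical helper: returns true if a TCP connection to host:port
# succeeds within the timeout - the sort of check an infrastructure
# test suite might run against a newly provisioned machine.
def port_open?(host, port, timeout_seconds = 2)
  Timeout.timeout(timeout_seconds) do
    TCPSocket.new(host, port).close
    true
  end
rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, Timeout::Error, SocketError
  false
end

# Demo: a local TCP server stands in for the provisioned service.
server = TCPServer.new("127.0.0.1", 0) # port 0 => OS picks a free port
port   = server.addr[1]

puts port_open?("127.0.0.1", port)     # service is up
server.close
puts port_open?("127.0.0.1", port)     # nothing listening any more
```

In a real suite these checks would live inside a test framework such as RSpec and run against the hosts Terraform and Ansible just created, so a broken provisioning step fails fast instead of surfacing later as an application bug.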

Question: 
How would you describe the persona of the target audience of this talk?
Answer: 

Anyone who’s interested in programmable infrastructure, which should be anyone currently using DevOps practices, or trying to implement them.

Question: 
How would you rate the level of this talk?
Answer: 

It’s reasonably generalist. There are some specifics in terms of our implementation, but I don’t want to get bogged down in the details - how you test your infrastructure most likely depends upon what your infrastructure is.

Question: 
QCon targets advanced architects and senior development leads; what actionable takeaways do you feel that type of persona will walk away from your talk with?
Answer: 

They should examine their infrastructure and develop some sort of strategy for how it should be tested. It doesn’t have to be a formal strategy - but they should consider the risks and benefits of changing their approach.

Question: 
What do you feel is the most disruptive tech in IT right now?
Answer: 

Not related to my talk, but machine learning is something that we should all be taking very seriously. Almost every tech firm can improve their current business by incorporating some level of machine learning. Uber could predict where customers might want to be picked up from. McDonald’s might want to predict busier or slower hours and adjust staff hours accordingly. Tesco could predict when certain products are going to sell better, and who to sell them to. We run a machine learning bot in the OpenCredo Slack to predict foosball scores - it’s often more accurate than we are!

Speaker: Matt Long

Dev-in-test @OpenCredo

Matt is a QA Consultant for OpenCredo, a London-based consultancy specializing in helping clients build and deploy emerging technologies. He is responsible for the testing requirements in a number of OpenCredo engagements, with specialist knowledge in the creation and deployment of automated testing frameworks. Matt works with tools such as Java, Selenium, Cucumber, Ruby, and Gatling. He builds and maintains a machine learning foosball bot in his spare time.
