Last week PuppetConf took place at the Fairmont in San Francisco, gathering Puppet users and enthusiasts from all over the world for five days of training, development and sessions.
MaestroDev was present at the event, as we are heavy Puppet users, and contributors! We are currently the third most frequent contributor of Puppet modules to the Puppet Forge, publishing 30 of the 50 modules we use on a day-to-day basis.
Our architect Carlos Sanchez gave a presentation on How to Develop Puppet Modules: From Source to the Forge With Zero Clicks, showing how to apply an automatic build-test-release-deploy cycle to Puppet modules, because when you treat infrastructure as code, you have to apply development best practices to your infrastructure. In a demo, he showed how we use Maestro to automatically build, test, release, and deploy our Puppet modules to the Puppet Forge, triggered on each commit for a truly Continuous Delivery experience.
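The cycle described above can be sketched as a simple ordered pipeline. This is a minimal illustration, not Maestro's implementation: the stage commands are standard Puppet toolchain invocations, and the `rake module:push` release step assumes the puppet-blacksmith gem is installed and configured with Forge credentials.

```python
# Minimal sketch of a commit-triggered build-test-release pipeline for a
# Puppet module. The pipeline driver is illustrative; the commands are
# standard Puppet tooling (puppet-lint, rspec-puppet, puppet module build,
# and puppet-blacksmith's rake task for publishing to the Forge).
import subprocess

PIPELINE = [
    ("lint",    ["puppet-lint", "manifests/"]),   # style checks
    ("test",    ["rake", "spec"]),                # rspec-puppet unit tests
    ("build",   ["puppet", "module", "build"]),   # package the module tarball
    ("release", ["rake", "module:push"]),         # publish to the Puppet Forge
]

def run_pipeline(runner=subprocess.check_call):
    """Run each stage in order, stopping at the first failure."""
    for stage, cmd in PIPELINE:
        print(f"--> {stage}: {' '.join(cmd)}")
        runner(cmd)
```

Hooked up to a commit trigger, a pipeline like this gives every push the same lint-test-build-release treatment, which is the point of treating modules like any other software project.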
And we don't just use Maestro to deploy Puppet modules to the Forge: we also integrate Maestro with Puppet and Puppet Enterprise to automate Puppet updates across your Puppet agents. Put these capabilities together with Maestro's other development automation capabilities and you get Continuous Delivery for both your applications and your infrastructure.
Imagine you are building or releasing the latest version of your software and need to propagate an update through all of your running Puppet agents, along with any updates to the config files or packages managed by Puppet. Instead of waiting for the next scheduled Puppet run, Maestro can automatically deploy the Puppet manifests and modules to the Puppet master once they have been tested, then propagate those changes, along with the latest application build, to all or a subset of the servers running Puppet agents.
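As a rough illustration of that flow, the manifest change can be as small as bumping a pinned version before the agent runs are triggered. The variable name and manifest layout below are hypothetical examples, not Maestro internals:

```python
# Illustrative sketch: bump the application version pinned in a Puppet
# manifest so that the next (triggered) agent run rolls out the new build.
# The $app_version variable and the manifest content are hypothetical.
import re

def bump_version(manifest_text: str, new_version: str) -> str:
    """Replace the value of the $app_version assignment in a manifest."""
    return re.sub(
        r"(\$app_version\s*=\s*')[^']*(')",
        lambda m: m.group(1) + new_version + m.group(2),
        manifest_text,
    )

manifest = "$app_version = '1.2.0'\npackage { 'myapp': ensure => $app_version }"
print(bump_version(manifest, "1.3.0"))
```

Once the updated manifest reaches the master, the agents can be told to run immediately, for example with MCollective's `mco puppet runonce`, rather than waiting for the next scheduled interval.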
Thanks to all the Puppet Labs guys for a great event, and especially to Ryan Coleman for the work he is doing on the Forge and his help getting these Maestro integrations working. We look forward to seeing everyone at the next event!
US Department of Defense Expands Partnership with MaestroDev and Apelon to Improve Clinical Data Interoperability
Our ongoing work with the Veterans Health Administration has expanded to include the Department of Defense to help exchange clinical data from existing electronic medical record systems. The focus of the first deliverable is the exchange of patient allergy information.
The partnership between Apelon and MaestroDev is enabling use of the International Health Terminology Standards Development Organization (IHTSDO) Workbench — an important open source tool for the development of standardized medical terminology — by the two Federal Agencies.
By streamlining the exchange of clinical data between electronic medical record systems, MaestroDev and Apelon are making it easier for DoD and VA teams to share critical healthcare information. This will help eliminate confusion and ensure that patients receive the right diagnosis and care, based on consistent medical definitions, across DoD bases worldwide. Early stage implementations of the new data sharing and modeling capabilities are focusing on the exchange of patient allergy information.
Through the partnership, MaestroDev is providing the DoD with development and operations automation, including a platform on which to build, test, and release software. Apelon is contributing products and expertise in terminology and data standardization.
A Deeper Dive into the Solution
Apelon and MaestroDev are configuring the Workbench to manage each Agency’s local terminology along with industry standards including SNOMED CT, ICD-9-CM, CPT, LOINC, RxNorm, and others. The integration of Apelon’s terminology experience and MaestroDev’s DevOps Orchestration platform allows multiple users to complete the workflows needed to contribute new content, without friction, to the U.S. extension of SNOMED CT and to the rest of the IHTSDO community.
This project leverages each company’s experience with open source technologies to achieve efficiency and quality. Together, MaestroDev and Apelon deploy a particularly effective system for modeling and distributing medical terminologies.
Big news for teams developing and deploying applications in cloud environments! We have integrated Maestro with RightScale, a recognized leader in enterprise multi-cloud management, to provide automated deployment for development teams.
The combination of Maestro and RightScale provides an end-to-end solution that optimizes the delivery of applications across any combination of physical servers, virtualized environments, or public/private clouds. With Maestro Compositions and the RightScale plugin, you can now completely automate the deployment of your products into RightScale environments.
And, Maestro itself is available as a RightScale server template, so that you can instantly scale out your development, test, and deployment infrastructure in the cloud.
The new RightScale plugin has the following features:
Start and stop RightScale servers from templates
Retrieve RightScale server information to be used in other steps of your workflow, and track as part of the project history
Pause a composition until a RightScale server is in the desired state
Start and stop complete RightScale deployments
Execute a RightScript on a particular server
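To give a sense of what the “start a server” step does under the hood, here is a sketch of the corresponding RightScale API 1.5 request. The host, server id, and bearer-token header are illustrative assumptions; the plugin manages session authentication and error handling itself.

```python
# Illustrative sketch of launching a RightScale server via API 1.5.
# The API host, server id, and token below are placeholders.
import urllib.request

API_ROOT = "https://my.rightscale.com"

def launch_server_request(server_id: str, token: str) -> urllib.request.Request:
    """Build the POST request that launches a server by id."""
    return urllib.request.Request(
        f"{API_ROOT}/api/servers/{server_id}/launch",
        data=b"",  # the launch action needs no request body here
        method="POST",
        headers={
            "X-Api-Version": "1.5",
            "Authorization": f"Bearer {token}",
        },
    )
```

The other plugin features (stop, deployment start/stop, RightScript execution) follow the same pattern of resource-plus-action calls against the API.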
With these features, MaestroDev and RightScale users will now be able to:
Construct sophisticated deployment methods using templates and best practices
Deploy applications to multiple clouds and stacks automatically from source code
Manage end-to-end delivery across multiple environments
Easily establish a robust development and test infrastructure within your cloud environments
Scale development and test infrastructure automatically and consistently
In summary, the MaestroDev and RightScale solution is a significant leap forward for any development team looking to streamline and accelerate public, private, and hybrid cloud deployments.
Try Maestro for free at maestrodev.com.
Apelon and MaestroDev Partner to Improve Healthcare Terminologies at VHA
Apelon and MaestroDev are teaming up to help the Veterans Health Administration develop a new open source terminology management environment. The collaboration will allow the VHA to extend its use of the International Health Terminology Standards Development Organization (IHTSDO) Workbench, an important emerging open source terminology development tool.
Through the partnership MaestroDev is providing development and operations automation, including an open source tools-based platform on which to build, test and release software. Apelon is providing products and consulting expertise in terminology and data standardization.
The joint solution will allow the VHA to configure Workbench to manage local, national, and international terminology standards across VA hospitals nationwide. The integration of Apelon’s terminology experience and MaestroDev’s DevOps Orchestration engine offers an ideal approach to streamlining workflows and enhancing productivity in a terminology project of this scale.
The MaestroDev team has extensive experience with Workbench projects. This includes successful engagements with Kaiser Permanente, the Australian National eHealth Transition Authority, and IHTSDO.
By automating best practices from DevOps and agile methodologies, the MaestroDev team can provide Continuous Delivery of the Workbench software. This helps drive the internal terminology standardization program at the VHA, improves the exchange and analysis of veterans’ health information, and improves the VHA’s interactions with community partners.
This is a copy of the guest post on the Flowdock blog. You can read the original here.
MaestroDev is a proud Flowdock customer. Since we began using it early in the year, we have greatly improved the internal visibility of development progress, and streamlined our methods of communication – reducing the number of redundant calls and emails.
The MaestroDev product development team is globally distributed, covering 4 different timezones. Our Flow is active 24 hours a day with development information and tagged updates. Whether they work face to face or remotely, Flowdock puts all of our team members on an equal footing: each can catch up on important discussions as they start their day and leave notes about progress for teammates they may not be able to meet with immediately.
We have developed Maestro, our enterprise-grade DevOps Orchestration engine, to help every member of a software delivery team be more efficient and collaborative. Maestro introduces Compositions: reusable definitions of sequences of tools, processes, and infrastructure that can be automated and interacted with. Compositions encapsulate best practices and encourage consistency across projects, reducing ramp-up time and silos of expertise about infrastructure.
Maestro is built to take advantage of modern public and private cloud technology to dynamically scale build, test, and deployment infrastructure. This reduces friction between development, QA, and operations team members and cuts the wait time for necessary infrastructure. Finally, Compositions and their execution output provide a single source of truth and history across a variety of systems, where team members can keep up to date, participate in decision points, and gather feedback from integrated tools to guide future improvements.
Integrating Flowdock and Maestro for Delivery Visibility
Flowdock complements Maestro as a dedicated information flow for communication, notifications, and actions. For this reason, we have developed Flowdock integration for Maestro and incorporated it into our delivery workflows.
Maestro integrates with a number of different tools; at MaestroDev, some that we use are:
- JIRA: issue tracking and sprint planning
- GitHub: source control
- Jenkins: continuous integration and automated builds
- Apache Archiva: build artifact management
- Vagrant and VirtualBox: virtual machines for testing and delivery
- Puppet: infrastructure configuration management
With these tools orchestrated by Maestro and information streaming to Flowdock, we’re able to track a change from a JIRA ticket and a commit, through its deployment on a preview instance, automated functional tests and a complete candidate virtual machine image for distribution.
Our primary automated workflow looks like this:
- A commit at GitHub sends a notification to Flowdock and triggers a Composition to start the rest of the process.
- Maestro ensures a suitable Jenkins job is executed to build the project and publish to the artifact repository.
- Flowdock is notified of success (showing the published RPM version and build number) or failure (showing the full output and the error that occurred).
- If the build succeeded, Maestro concurrently starts Compositions to update the preview instance and run the functional tests.
- For the preview instance, we update the RPM version in the Puppet manifest and trigger a Puppet agent run on the host. Puppet reports back to Flowdock when the run is complete, so we know the preview instance has the change.
- For functional tests, a virtual machine is started with Vagrant, provisioned with Puppet, and then tests are run via Jenkins using Cucumber and Capybara. If any fail, a notification is sent to the main Flow.
- If the functional tests are successful, then a new virtual machine is produced and a notification sent to the main Flow.
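The Flowdock notifications in the steps above go through Flowdock’s Team Inbox push API. Here is a stripped-down sketch of such a notification; the flow token, sender address, and tags are placeholders, and the real integration builds these messages inside Maestro:

```python
# Minimal sketch of a build notification sent via Flowdock's Team Inbox
# push API. The flow token, from_address, and tags are placeholders.
import json
import urllib.request

def build_notification(flow_token: str, subject: str, content: str, tags):
    """Build the POST request for a team inbox message."""
    payload = {
        "source": "Maestro",
        "from_address": "builds@example.com",
        "subject": subject,
        "content": content,
        "tags": tags,
    }
    return urllib.request.Request(
        f"https://api.flowdock.com/v1/messages/team_inbox/{flow_token}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Because every pipeline stage emits a message like this, the Flow becomes a searchable, tagged timeline of builds, deployments, and test results.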
Of course, we have many other such Compositions for sequences including releases, building and deploying Puppet modules, publishing promoted VMs to Amazon S3, and so on – all similarly integrated with Flowdock.
If you’re interested in trying Maestro out for yourself, contact us and we’ll set you up with an evaluation system and several similar pre-configured examples.
Flowdock has become the first thing I check in the morning, and one of the most useful tools I turn to throughout the day to find out what is happening, discuss a solution with a colleague, or just share a link to something fun and interesting. Our thanks go to the Flowdock team for a great product!
Who is John Galt?
As DevOps continues to be defined and debated, what is undeniable is that change is hard — as John Galt tried to show Dagny Taggart, and as she eventually conceded, in Ayn Rand’s Atlas Shrugged — and it is especially hard for large, distributed enterprise teams. DevOps evangelists are passionate about the benefits and critics challenge the origins, but the end of the DevOps discussion is the same: if we agree that more communication between developers and IT operations produces better software, then the discussion should focus on “How?”
As MaestroDev works with our customers and prospects, one consistent request, both in our consulting work and of our Maestro product, is to facilitate the best practices that are discussed in blogs, lunches, and sprints. So there is the question: how do we implement the DevOps ideas we have?
The answer is what we call DevOps Orchestration. Automating the tasks and tools that span the DevOps lifecycle is a first step. As an extension of Continuous Integration and Continuous Delivery, it covers the automation of tasks from the moment they are first set up. But DevOps Orchestration also prescribes the “how” of daily interaction between groups: what is needed is a common location and mechanism to communicate and share the very best practices that DevOps promotes, along with their unique details for each task, team, and project.
Maestro excels at DevOps Orchestration by providing a central location for this bi-directional communication and by abstracting the logic away from the tools themselves. Its tool-agnostic, task-based mechanism captures the “language” of DevOps interactions between development, testing, and operations groups, and even lets manager approvals interact with the automated processes. DevOps Orchestration accelerates the pace of change, fosters best practices, and removes friction from the best intentions of teams who, ultimately, agree on the same end goals.
Maestro DevOps Orchestration automates your existing installed toolset from source code management through build, test, deploy, and environment management — all from a single screen. Ease-of-use gains alone promote DevOps adoption by making it easier for the author to capture task details of all types in a single location, and because all other users know there is a single place to find, review, and change task definitions and configurations. Great things can start from a single place.