Official Course
AZ-400T00
Designing and Implementing Microsoft DevOps Solutions
Disclaimer
Information in this document, including URL and other Internet Web site references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.
The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations and warranties, either expressed, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is
not responsible for the contents of any linked site or any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement by Microsoft of the site or the products contained
therein.
© 2019 Microsoft Corporation. All rights reserved.
Microsoft and the trademarks listed at http://www.microsoft.com/trademarks are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners.
EULA
13. “Personal Device” means one (1) personal computer, device, workstation or other digital electronic
device that you personally own or control that meets or exceeds the hardware level specified for
the particular Microsoft Instructor-Led Courseware.
14. “Private Training Session” means the instructor-led training classes provided by MPN Members for
corporate customers to teach a predefined learning objective using Microsoft Instructor-Led
Courseware. These classes are not advertised or promoted to the general public and class attendance is restricted to individuals employed by or contracted by the corporate customer.
15. “Trainer” means (i) an academically accredited educator engaged by a Microsoft Imagine Academy
Program Member to teach an Authorized Training Session, (ii) an academically accredited educator
validated as a Microsoft Learn for Educators – Validated Educator, and/or (iii) an MCT.
16. “Trainer Content” means the trainer version of the Microsoft Instructor-Led Courseware and
additional supplemental content designated solely for Trainers’ use to teach a training session
using the Microsoft Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint
presentations, trainer preparation guide, train the trainer materials, Microsoft One Note packs,
classroom setup guide and Pre-release course feedback form. To clarify, Trainer Content does not
include any software, virtual hard disks or virtual machines.
2. USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one
copy per user basis, such that you must acquire a license for each individual that accesses or uses the
Licensed Content.
●● 2.1 Below are five separate sets of use rights. Only one set of rights applies to you.
1. If you are a Microsoft Imagine Academy (MSIA) Program Member:
1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User who is enrolled in the Authorized Training Session, and only immediately
prior to the commencement of the Authorized Training Session that is the subject matter
of the Microsoft Instructor-Led Courseware being provided, or
2. provide one (1) End User with the unique redemption code and instructions on how they
can access one (1) digital version of the Microsoft Instructor-Led Courseware, or
3. provide one (1) Trainer with the unique redemption code and instructions on how they
can access one (1) Trainer Content.
3. For each license you acquire, you must comply with the following:
1. you will only provide access to the Licensed Content to those individuals who have
acquired a valid license to the Licensed Content,
2. you will ensure each End User attending an Authorized Training Session has their own
valid licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the
Authorized Training Session,
3. you will ensure that each End User provided with the hard-copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End User will agree that their use of the Microsoft Instructor-Led Courseware will be subject to the terms in this agreement prior to providing them with the Microsoft Instructor-Led Courseware. Each individual will be required to denote their acceptance of this agreement in a manner that is enforceable under local law prior to their accessing the Microsoft Instructor-Led Courseware,
4. you will ensure that each Trainer teaching an Authorized Training Session has their own
valid licensed copy of the Trainer Content that is the subject of the Authorized Training
Session,
5. you will only use qualified Trainers who have in-depth knowledge of and experience with
the Microsoft technology that is the subject of the Microsoft Instructor-Led Courseware
being taught for all your Authorized Training Sessions,
6. you will only deliver a maximum of 15 hours of training per week for each Authorized
Training Session that uses a MOC title, and
7. you acknowledge that Trainers that are not MCTs will not have access to all of the trainer
resources for the Microsoft Instructor-Led Courseware.
2. If you are a Microsoft Learning Competency Member:
1. Each license acquired may only be used to review one (1) copy of the Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is in digital format, you may install one (1) copy on up to three (3) Personal Devices.
You may not install the Microsoft Instructor-Led Courseware on a device you do not own or
control.
2. For each license you acquire on behalf of an End User or MCT, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User attending the Authorized Training Session and only immediately prior to
the commencement of the Authorized Training Session that is the subject matter of the
Microsoft Instructor-Led Courseware provided, or
2. provide one (1) End User attending the Authorized Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) MCT with the unique redemption code and instructions on how
they can access one (1) Trainer Content.
3. For each license you acquire, you must comply with the following:
1. you will only provide access to the Licensed Content to those individuals who have
acquired a valid license to the Licensed Content,
2. you will ensure that each End User attending an Authorized Training Session has their
own valid licensed copy of the Microsoft Instructor-Led Courseware that is the subject of
the Authorized Training Session,
3. you will ensure that each End User provided with a hard-copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject
to the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agreement in a manner that is enforceable under local law prior to their accessing the Microsoft Instructor-Led Courseware,
4. you will ensure that each MCT teaching an Authorized Training Session has their own
valid licensed copy of the Trainer Content that is the subject of the Authorized Training
Session,
5. you will only use qualified MCTs who also hold the applicable Microsoft Certification
credential that is the subject of the MOC title being taught for all your Authorized
Training Sessions using MOC,
6. you will only provide access to the Microsoft Instructor-Led Courseware to End Users,
and
7. you will only provide access to the Trainer Content to MCTs.
3. If you are a MPN Member:
1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User attending the Private Training Session, and only immediately prior to the
commencement of the Private Training Session that is the subject matter of the Microsoft Instructor-Led Courseware being provided, or
2. provide one (1) End User who is attending the Private Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) Trainer who is teaching the Private Training Session with the
unique redemption code and instructions on how they can access one (1) Trainer
Content.
3. For each license you acquire, you must comply with the following:
1. you will only provide access to the Licensed Content to those individuals who have
acquired a valid license to the Licensed Content,
2. you will ensure that each End User attending a Private Training Session has their own
valid licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the
Private Training Session,
3. you will ensure that each End User provided with a hard copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject
to the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agreement in a manner that is enforceable under local law prior to their accessing the Microsoft Instructor-Led Courseware,
4. you will ensure that each Trainer teaching a Private Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Private Training Session,
5. you will only use qualified Trainers who hold the applicable Microsoft Certification
credential that is the subject of the Microsoft Instructor-Led Courseware being taught
for all your Private Training Sessions,
6. you will only use qualified MCTs who hold the applicable Microsoft Certification credential that is the subject of the MOC title being taught for all your Private Training Sessions
using MOC,
7. you will only provide access to the Microsoft Instructor-Led Courseware to End Users,
and
8. you will only provide access to the Trainer Content to Trainers.
4. If you are an End User:
For each license you acquire, you may use the Microsoft Instructor-Led Courseware solely for
your personal training use. If the Microsoft Instructor-Led Courseware is in digital format, you
may access the Microsoft Instructor-Led Courseware online using the unique redemption code
provided to you by the training provider and install and use one (1) copy of the Microsoft
Instructor-Led Courseware on up to three (3) Personal Devices. You may also print one (1) copy
of the Microsoft Instructor-Led Courseware. You may not install the Microsoft Instructor-Led
Courseware on a device you do not own or control.
5. If you are a Trainer:
1. For each license you acquire, you may install and use one (1) copy of the Trainer Content in
the form provided to you on one (1) Personal Device solely to prepare and deliver an
Authorized Training Session or Private Training Session, and install one (1) additional copy
on another Personal Device as a backup copy, which may be used only to reinstall the
Trainer Content. You may not install or use a copy of the Trainer Content on a device you do
not own or control. You may also print one (1) copy of the Trainer Content solely to prepare
for and deliver an Authorized Training Session or Private Training Session.
2. If you are an MCT, you may customize the written portions of the Trainer Content that are
logically associated with instruction of a training session in accordance with the most recent
version of the MCT agreement.
3. If you elect to exercise the foregoing rights, you agree to comply with the following: (i)
customizations may only be used for teaching Authorized Training Sessions and Private
Training Sessions, and (ii) all customizations will comply with this agreement. For clarity, any
use of “customize” refers only to changing the order of slides and content, and/or not using
all the slides or content; it does not mean changing or modifying any slide or content.
●● 2.2 Separation of Components. The Licensed Content is licensed as a single unit and you may not separate its components and install them on different devices.
●● 2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights
above, you may not distribute any Licensed Content or any portion thereof (including any permitted modifications) to any third parties without the express written permission of Microsoft.
●● 2.4 Third Party Notices. The Licensed Content may include third party code that Microsoft, not the third party, licenses to you under this agreement. Notices, if any, for the third party
code are included for your information only.
●● 2.5 Additional Terms. Some Licensed Content may contain components with additional
terms, conditions, and licenses regarding its use. Any non-conflicting terms in those conditions and licenses also apply to your use of that respective component and supplement the terms described in this agreement.
laws and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property
rights in the Licensed Content.
6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regulations. You must comply with all domestic and international export laws and regulations that apply to
the Licensed Content. These laws include restrictions on destinations, end users and end use. For
additional information, see www.microsoft.com/exporting.
7. SUPPORT SERVICES. Because the Licensed Content is provided “as is”, we are not obligated to
provide support services for it.
8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you
fail to comply with the terms and conditions of this agreement. Upon termination of this agreement
for any reason, you will immediately stop all use of and delete and destroy all copies of the Licensed
Content in your possession or under your control.
9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible
for the contents of any third party sites, any links contained in third party sites, or any changes or
updates to third party sites. Microsoft is not responsible for webcasting or any other form of transmission received from any third party sites. Microsoft is providing these links to third party sites to
you only as a convenience, and the inclusion of any link does not imply an endorsement by Microsoft
of the third party site.
10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.
11. APPLICABLE LAW.
1. United States. If you acquired the Licensed Content in the United States, Washington state law
governs the interpretation of this agreement and applies to claims for breach of it, regardless of
conflict of laws principles. The laws of the state where you live govern all other claims, including
claims under state consumer protection laws, unfair competition laws, and in tort.
2. Outside the United States. If you acquired the Licensed Content in any other country, the laws of
that country apply.
12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the
laws of your country. You may also have rights with respect to the party from whom you acquired the
Licensed Content. This agreement does not change your rights under the laws of your country if the
laws of your country do not permit it to do so.
13. DISCLAIMER OF WARRANTY. THE LICENSED CONTENT IS LICENSED "AS-IS" AND "AS AVAILABLE." YOU BEAR THE RISK OF USING IT. MICROSOFT AND ITS RESPECTIVE AFFILIATES GIVES NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. YOU MAY HAVE ADDITIONAL CONSUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT CANNOT CHANGE. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT AND ITS RESPECTIVE AFFILIATES EXCLUDES ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO
US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST
PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
■■ Module 0 Welcome
Start here
■■ Module 1 Planning for DevOps
Module overview
Transformation planning
Project selection
Team structures
Migrating to DevOps
Lab
Module review and takeaways
■■ Module 2 Getting Started with Source Control
Module overview
What is source control?
Benefits of source control
Types of source control systems
Introduction to Azure Repos
Introduction to GitHub
Migrating from Team Foundation Version Control (TFVC) to Git in Azure Repos
Lab
Module review and takeaways
■■ Module 3 Managing technical debt
Module overview
Identifying technical debt
Knowledge sharing within teams
Modernizing development environments with GitHub Codespaces
Lab
Module review and takeaways
■■ Module 4 Working with Git for Enterprise DevOps
Module overview
How to structure your Git Repo
Git branching workflows
Collaborating with pull requests in Azure Repos
Why care about Git hooks?
Fostering inner source
Managing Git Repositories
Lab
Module review and takeaways
■■ Module 5 Configuring Azure Pipelines
Module overview
The concept of pipelines in DevOps
Azure Pipelines
Evaluate use of Microsoft-hosted versus self-hosted agents
Agent pools
Pipelines and concurrency
Azure DevOps and open-source projects (public projects)
Azure Pipelines YAML versus Visual Designer
Lab
Module review and takeaways
■■ Module 6 Implementing Continuous Integration with Azure Pipelines
Module overview
Continuous integration overview
Implementing a build strategy
Integration with Azure Pipelines
Integrating external source control with Azure Pipelines
Set up self-hosted agents
Labs
Module review and takeaways
■■ Module 7 Managing Application Configuration and Secrets
Module overview
Introduction to security
Implement a secure development process
Rethinking application configuration data
Manage secrets, tokens, and certificates
Integrating with identity management systems
Implementing application configuration
Lab
Module review and takeaways
■■ Module 8 Implementing Continuous Integration with GitHub Actions
Module overview
GitHub Actions
Continuous integration with GitHub Actions
Securing secrets for GitHub Actions
Lab
Module review and takeaways
■■ Module 9 Designing and Implementing a Dependency Management Strategy
Module overview
Packaging dependencies
Package management
Migrating and consolidating artifacts
Package security
Implement a versioning strategy
Lab
Module Review and Takeaways
■■ Module 10 Designing a Release Strategy
Module overview
Introduction to continuous delivery
Release strategy recommendations
Building a high-quality release pipeline
Choosing the right release management tool
Labs
Module review and takeaways
■■ Module 11 Implementing Continuous Deployment using Azure Pipelines
Module overview
Create a release pipeline
Provision and configure environments
Manage and modularize tasks and templates
Configure automated integration and functional test automation
Automate inspection of health
Labs
Module review and takeaways
■■ Module 12 Implementing an Appropriate Deployment Pattern
Module overview
Introduction to deployment patterns
Implement blue-green deployment
Feature toggles
Canary releases
Dark launching
A/B testing
Progressive exposure deployment
Lab
Module review and takeaways
■■ Module 13 Managing Infrastructure and Configuration using Azure Tools
Module overview
Infrastructure as code and configuration management
Create Azure resources using ARM templates
Create Azure resources by using Azure CLI
Azure Automation with DevOps
Desired State Configuration (DSC)
Lab
Module review and takeaways
■■ Module 14 Using Third Party Infrastructure as Code Tools Available with Azure
Module overview
Chef
Puppet
Ansible
Terraform
Labs
Module review and takeaways
■■ Module 15 Managing Containers using Docker
Module overview
Implementing a container build strategy
Implementing Docker multi-stage builds
Lab
Module review and takeaways
■■ Module 16 Creating and Managing Kubernetes Service Infrastructure
Module overview
Azure Kubernetes Service (AKS)
Kubernetes tooling
Integrating AKS with Pipelines
Lab
Module review and takeaways
■■ Module 17 Implementing Feedback for Development Teams
Module overview
Implement tools to track system usage, feature usage, and flow
Implement routing for mobile application crash report data
Develop monitoring and status dashboards
Integrate and configure ticketing systems
Lab
Module Review and Takeaways
■■ Module 18 Implementing System Feedback Mechanisms
Module overview
Site reliability engineering
Design practices to measure end-user satisfaction
Design processes to capture and analyze user feedback
Design processes to automate application analytics
Managing alerts
Blameless retrospectives and a just culture
Lab
Module review and takeaways
■■ Module 19 Implementing Security in DevOps Projects
Module overview
Security in the Pipeline
Azure Security Center
Lab
Module review and takeaways
■■ Module 20 Validating Code Bases for Compliance
Module overview
Open-source software
Managing security and compliance policies
Integrating license and vulnerability scans
Lab
Module review and takeaways
Module 0 Welcome
Start here
Microsoft DevOps curriculum
Welcome to the Designing and Implementing Microsoft DevOps Solutions course. This course will help you prepare for the AZ-400, Designing and Implementing Microsoft DevOps Solutions1 certification exam.
The DevOps certification exam is for DevOps professionals who combine people, process, and technolo-
gies to continuously deliver valuable products and services that meet end user needs and business
objectives. DevOps professionals streamline delivery by optimizing practices, improving communications
and collaboration, and creating automation. They design and implement strategies for application code
and infrastructure that allow for continuous integration, continuous testing, continuous delivery, and
continuous monitoring and feedback.
Exam candidates must be proficient with Agile practices. They must be familiar with both Azure administration and Azure development, and experts in at least one of these areas. DevOps professionals must be able to design and implement DevOps practices for version control, compliance, infrastructure as code, configuration management, build, release, and testing by using Azure technologies.
There are seven exam study areas.
1 https://docs.microsoft.com/en-us/learn/certifications/exams/AZ-400
Course syllabus
This course includes content that will help you prepare for the Microsoft DevOps Solutions certification exam. Other content is included to ensure you have a complete picture of DevOps. The course content includes a mix of graphics, reference links, module review questions, and optional hands-on labs.
Module 1 – Planning for DevOps
●● Lesson 1: Module overview
●● Lesson 2: Transformation planning
●● Lesson 3: Project selection
●● Lesson 4: Team structures
●● Lesson 5: Migrating to DevOps
●● Lesson 6: Lab 01: Agile planning and portfolio management with Azure Boards
●● Lesson 7: Module review and takeaways
Module 2 – Getting Started with Source Control
●● Lesson 1: Module overview
●● Lesson 2: What is source control?
●● Lesson 3: Benefits of source control
●● Lesson 4: Types of source control systems
●● Lesson 5: Introduction to Azure Repos
●● Lesson 6: Introduction to GitHub
●● Lesson 7: Migrating from Team Foundation Version Control (TFVC) to Git in Azure Repos
●● Lesson 8: Lab 02: Version controlling with Git in Azure Repos
●● Lesson 9: Module review and takeaways
Module 3 – Managing Technical Debt
●● Lesson 1: Module overview
●● Lesson 2: Identifying technical debt
●● Lesson 3: Knowledge sharing within teams
●● Lesson 4: Modernizing development environments with Codespaces
●● Lesson 5: Lab 03: Sharing team knowledge using Azure Project Wikis
●● Lesson 6: Module review and takeaways
Module 4 – Working with Git for Enterprise DevOps
●● Lesson 1: Module overview
●● Lesson 2: How to structure your Git Repo
●● Lesson 3: Git branching workflows
●● Lesson 4: Collaborating with pull requests in Azure Repos
●● Lesson 5: Why care about Git Hooks?
●● Lesson 6: Fostering inner source
Lab updates
The labs are updated on a regular basis. For the latest information, please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions10
10 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Module 1 Planning for DevOps
Module overview
Plan before you act. This module will help you understand what DevOps is and how to plan for a DevOps
transformation journey.
Learning objectives
After completing this module, students will be able to:
●● Plan for the transformation with shared goals and timelines
●● Select a project and identify project metrics and Key Performance Indicators (KPIs)
●● Create a team and agile organizational structure
●● Design a tool integration strategy
●● Design a license management strategy (e.g. Azure DevOps and GitHub users)
●● Design a strategy for end-to-end traceability from work items to working software
●● Design an authentication and access strategy
●● Design a strategy for integrating on-premises and cloud resources
Transformation planning
What is DevOps?
According to Donovan Brown, “DevOps is the union of people, process, and products to enable continuous delivery of value to our end users” (What is DevOps?1).
The contraction of “Dev” and “Ops” refers to replacing siloed Development and Operations to create
multidisciplinary teams that now work together with shared and efficient practices and tools. Essential
DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of
applications. DevOps is a continuous journey.
1 https://www.donovanbrown.com/post/what-is-devops
Become data-informed
Hopefully, you use data to inform what to do in your next cycle. Many experience reports tell us that
roughly one-third of the deployments will have negative business results, roughly one-third will have positive results, and one-third will make no difference. Ideally, you would like to fail fast on those that
don’t advance the business and double down on those that support the business. Sometimes this is
called pivot or persevere.
2. Continuous Delivery of software solutions to production and testing environments helps organizations quickly fix bugs and respond to ever-changing business requirements (see the pipeline sketch after this list).
3. Version Control, usually with a Git-based Repository, enables teams located anywhere in the world to communicate effectively during daily development activities as well as to integrate with software development tools for monitoring activities such as deployments.
4. Agile planning and lean project management techniques are used to plan and isolate work into
sprints, manage team capacity, and help teams quickly adapt to changing business needs. A DevOps
Definition of Done is working software collecting telemetry against the intended business objectives.
5. Monitoring and Logging of running applications including production environments for application
health as well as customer usage, helps organizations form a hypothesis and quickly validate or
disprove strategies. Rich data is captured and stored in various logging formats.
6. Public and Hybrid Clouds have made the impossible easy. The cloud has removed traditional bottlenecks and helped commoditize infrastructure. Whether you use Infrastructure as a Service (IaaS) to lift
and shift your existing apps, or Platform as a Service (PaaS) to gain unprecedented productivity, the
cloud gives you a datacenter without limits.
7. Infrastructure as Code (IaC) is a practice which enables the automation and validation of creation and
teardown of environments to help with delivering secure and stable application hosting platforms.
8. Microservices architecture is leveraged to isolate business use cases into small reusable services that
communicate via interface contracts. This architecture enables scalability and efficiency.
9. Containers are the next evolution in virtualization. They are much more lightweight than virtual
machines, allow much faster hydration, and can be easily configured from files.
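As referenced in practice 2 above, the sketch below shows what a minimal continuous integration pipeline definition kept in version control might look like. It is an illustrative Azure Pipelines YAML example only; the trigger branch, agent image, and .NET build commands are assumptions rather than something prescribed by this course, and they would be replaced with whatever suits your project.

```yaml
# Minimal, illustrative azure-pipelines.yml stored in the repository.
# Assumption: a .NET project sits at the repository root; adjust the steps to your stack.
trigger:
  branches:
    include:
      - main                  # run continuous integration on every push to main

pool:
  vmImage: 'ubuntu-latest'    # Microsoft-hosted build agent

steps:
  - script: dotnet build --configuration Release
    displayName: Build the solution
  - script: dotnet test --configuration Release --no-build
    displayName: Run unit tests
```

Because the pipeline definition lives alongside the application code, changes to the build process are themselves version controlled, reviewed, and traceable.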
For DevOps transformations, the separate team should be made up of staff members, all of whom are
focused on and measured on the transformation outcomes, and not involved in the operational day-to-
day work. The team might also include some external experts that can fill the knowledge gaps and help
to advise on processes that are new to the existing staff members. Ideally the staff members who were
recruited for this should already be well-regarded throughout the organization and as a group they
should offer a broad knowledge base so they can think outside the box.
Project selection
Greenfield and brownfield projects defined
The terms greenfield and brownfield have their origins in residential and industrial building projects. A greenfield project is one done on a green field, that is, undeveloped land. A brownfield project is one that was done on land that has been previously used for other purposes. Because of the land use that has previously occurred, there could be challenges with reusing the land. Some of these would be obvious, like existing buildings, but others could be less obvious, like polluted soil.
Greenfield projects
A greenfield project will always appear to be an easier starting point because a blank slate offers the
chance to implement everything the way that you want. You might also have a better chance of avoiding
existing business processes that do not align with your project plans.
For example, even if current IT policies do not allow the use of cloud-based infrastructure for existing systems, cloud use might be permitted for entirely new applications that are designed for that environment from scratch. As another example, you might be able to sidestep well-entrenched internal political issues.
Brownfield projects
While brownfield projects come with the baggage of existing code bases, existing teams, and often a
great amount of technical debt, they can still be ideal projects for DevOps transformations.
When your teams are spending large percentages of their time just maintaining existing brownfield
applications, you have limited ability to work on new code. It's important to find a way to reduce that
time, and to make software releases less risky. A DevOps transformation can provide that.
The existing team members will often have been worn down by the limitations of how they have been working in the past and be keen to experiment with new ideas. These are often systems that the organization currently depends upon, so it might also be easier to gain stronger management buy-in for these projects because of the size of the potential benefits that could be derived. Management might also have a stronger sense of urgency to point brownfield projects in an appropriate direction, when compared to greenfield projects that don't currently exist.
Systems of record
Systems that provide the truth about data elements are often called systems of record. These systems have historically evolved slowly and carefully. For example, it is crucial that a banking system accurately reflect your bank balance.
Systems of record emphasize accuracy and security.
Systems of engagement
Many organizations have other systems that are more exploratory. These often use experimentation to
solve new problems. Systems of engagement are ones that are modified regularly. Making changes
quickly is prioritized over ensuring that the changes are right.
There is a perception that DevOps suits systems of engagement more than systems of record. But the lessons from high-performing companies show that this just isn't the case. Sometimes, the criticality of doing things right with a system of record is used as an excuse for not implementing DevOps practices. Worse, given the way that applications are interconnected, an issue in a system of engagement might end up causing a problem in a system of record anyway. Both types of systems are important. While it might be easier to start with a system of engagement when first starting a DevOps Transformation, DevOps practices apply to both types of systems. The most significant outcomes often come from transforming systems of record.
When choosing Canaries, it is important to find staff members who are keen to see new features as soon as they are available and who are also highly tolerant of issues that might arise.
Early adopters have similar characteristics to Canaries but often have work requirements that make them less tolerant of issues and interruptions to their ability to work.
While development and IT operations staff might generally be expected to be less conservative than users, their attitudes will range from very conservative to early adopter to those happy to work at the innovative edge.
Faster outcomes
●● Deployment Frequency. Increasing the frequency of deployments is often a critical driver in DevOps
projects.
●● Deployment Speed. As well as increasing how often deployments happen, it's important to decrease
the time that they take.
●● Deployment Size. How many features, stories, and bug fixes are being deployed each time?
●● Lead Time. How long does it take from the creation of a work item, until it is completed?
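If you track work in Azure Boards, one way to gather the raw data for a lead time calculation is a work item query run through the Azure DevOps CLI extension. The sketch below is only an illustration; the organization URL, project name, and work item type are placeholders, and the ClosedDate field shown is the one used by the Agile process template. Lead time for each item is then the difference between its created and closed dates.
az boards query --org https://dev.azure.com/fabrikam --project Fabrikam-Fiber --wiql "SELECT [System.Id], [System.CreatedDate], [Microsoft.VSTS.Common.ClosedDate] FROM WorkItems WHERE [System.WorkItemType] = 'User Story' AND [System.State] = 'Closed'"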
Efficiency
●● Server to Admin Ratio. Are the projects reducing the number of administrators required for a given
number of servers?
●● Staff Member to Customers Ratio. Is it possible for fewer staff members to serve a given number of
customers?
●● Application Usage. How busy is the application?
●● Application Performance. Is the application performance improving or dropping (based upon
application metrics)?
Culture
●● Employee morale. Are employees happy with the transformation and where the organization is head-
ing? Are they still willing to respond to further changes? This can be very difficult to measure, but is
often done by periodic, anonymous employee surveys.
●● Retention rates. Is the organization losing staff?
✔️ Note: It is important to choose metrics that focus on specific business outcomes and that achieve a
return on investment and increased business value.
Team structures
Agile development practices defined
Waterfall
Traditional software development practices involve determining a problem to be solved, analyzing the
requirements, building and testing the required code, and then delivering the outcome to users. This is
often referred to as a waterfall approach. The waterfall model follows a sequential order; a project
development team only moves to the next phase of development or testing if the previous step is
completed successfully. It's what an engineer would do when building a bridge or a building. So, it might
seem appropriate for software projects as well. However, the waterfall methodology has some drawbacks.
One relates to customer requirements. Even if a customer's requirements are defined very accurately at
the start of a project, these projects often take a long time, and by delivery the outcome may no longer
match what the customer needs. There's also a real challenge with gathering customer requirements in
the first place. Even if you built exactly what the customer asked for, it will often be different from what
they need. Customers often don't know what they want until they see it, or are unable to articulate
what they need.
Agile
By comparison, Agile methodology emphasizes constantly adaptive planning and early delivery with
continual improvement. Rather than restricting development to rigid specifications, it encourages rapid
and flexible responses to changes as they occur. In 2001, a group of highly regarded developers published
a manifesto for Agile software development. They said that development needs to favor individuals and
interactions over processes and tools, working software over comprehensive documentation, customer
collaboration over contract negotiation, and responding to changes over following a plan. Agile software
development methods are based on releases and iterations. One release might consist of several iterations.
Each iteration is like a very small independent project. After being estimated and prioritized,
features, bug fixes, enhancements, and refactoring work are assigned to a release, and then assigned
again to a specific iteration within the release, generally on a priority basis. At the end of each iteration,
there should be tested, working code. In each iteration, the team must focus on the outcomes of the
previous iteration and learn from that. An advantage of having teams focused on shorter-term outcomes
is that they are less likely to waste time over-engineering features or allowing unnecessary
scope creep to occur. Agile software development helps teams keep focused on business outcomes.
3 https://www.agilealliance.org/
4 https://www.agilealliance.org/agile101/the-agile-manifesto/
5 https://www.agilealliance.org/agile101/12-principles-behind-the-agile-manifesto/
By comparison, vertical team structures span the architecture and are aligned with skill sets or disciplines:
Vertical teams have been shown to provide stronger outcomes in Agile projects. It's important that each
product has a clearly identified owner.
Another key benefit of the vertical team structure is that scaling can occur by adding teams. In this
example, feature teams have been created rather than just project teams:
Some organizations bring in an Agile coach to help their teams adopt new methods. Agile coaches typically work with more than one team and try to remove any
roadblocks from inside or outside the organization. This work requires a variety of skills, including
coaching, mentoring, teaching, and facilitating. Agile coaches must be both trainers and consultants.
There is more than one type of agile coach. Some coaches are technical experts who aim to show staff
members how to apply specific concepts, like test-driven development and the implementation of
continuous integration or deployment. These coaches might run pair programming sessions with
staff members. Other coaches are focused on agile processes, determining requirements, and managing
work activities. They might assist in how to run effective stand-up and review meetings. Some coaches
may themselves act as scrum masters. They might mentor staff in how to fill these roles.
Over time, though, it's important for team members to develop an ability to mentor each other. Teams
should aim to be self-organizing. Team members are often expected to learn as they work and to acquire
skills from each other. To make this effective, though, the work itself needs to be done in a collaborative
way, not by individuals working by themselves.
Cultural changes
Over recent decades, offices have often become open spaces with few walls. At the time of writing, a big
shift to working from home has started, initiated as a response to the pandemic. Both situations can limit
collaboration, and ambient noise and distractions often also reduce productivity. Staff tend to work better
when they have quiet, comfortable working environments. Defined meeting times and locations let staff
choose when they want to interact with others.
Asynchronous communication should be encouraged but there should not be an expectation that all
communications will be responded to urgently. Staff should be able to focus on their primary tasks
without feeling like they are being left out of important decisions.
All meetings should have strict timeframes, and more importantly, have an agenda. If there is no agenda,
there should be no meeting.
As it is becoming harder to find the required staff, great teams will be just as comfortable with remote or
work-from-home workers as they are with those in the office. To make this successful though, collaboration
via communication should become part of the organization's DNA.
Staff should be encouraged to communicate openly and frankly. Learning to deal with conflict is impor-
tant for any team, as there will be disagreements at some point. Mediation skills training would be useful.
Cross-functional teams
Members of a team need to have good collaboration; it's also important to have great collaboration with
wider teams, to bring people with different functional expertise together to work toward a common goal.
Often, these will be people from different departments within an organization.
Faster and better innovation can occur in these cross-functional teams. People from different areas of the
organization will have different views of the same problem, and they are more likely to come up with
alternate solutions to problems or challenges. Existing entrenched ideas are more likely to be challenged.
Cross-functional teams can also minimize turf-wars within organizations. The more widely that a project
appears to have ownership, the easier it will be for it to be widely accepted. Bringing cross-functional
teams together also helps to spread knowledge across an organization.
Recognizing and rewarding collective behavior across cross-functional teams can also help to increase
team cohesion.
Collaboration tooling
The following collaboration tools are commonly used by agile teams:
Teams (Microsoft)6 A group chat application from Microsoft. It provides a combined location with
workplace chat, meetings, notes, and storage of file attachments. A user can be a member of many
teams.
Slack7 A commonly used tool for collaboration in Agile and DevOps teams. From a single interface, it
provides a series of separate communication channels. These can be organized by project, team, or topic.
Conversations are retained and are searchable. It is very easy to add both internal and external team
members. Slack directly integrates with many third party tools like GitHub8 for source code and Drop-
Box9 for document and file storage.
Jira10 A commonly used tool that allows for planning, tracking, releasing, and reporting.
Asana11 A common tool that's designed to keep details of team plans, progress, and discussions in a
single place. It has strong capabilities around timelines and boards.
Glip12 An offering from Ring Central that provides chat, video, and task management.
Other common tools that include collaboration offerings include ProofHub, RedBooth, Trello, DaPulse,
and many others.
Physical tools
Note that not all tools need to be digital tools. Many teams make extensive use of white boards for
collaborating on ideas, index cards for recording stories, and sticky notes for moving tasks around. Even
when digital tools are available, it might be more convenient to use these physical tools during stand up
and other meetings.
Collaboration tools
These tools were discussed in the previous topic.
6 https://products.office.com/en-us/microsoft-teams/group-chat-software
7 https://slack.com/
8 https://github.com/
9 https://dropbox.com/
10 https://www.atlassian.com/software/jira
11 https://asana.com/
12 https://glip.com/
As well as a complete CI/CD system, Azure DevOps includes flexible Kanban boards, traceability through
Backlogs, customizable dashboards, built-in scrum boards and integrates directly with code repositories.
Code changes can be linked directly to tasks or bugs.
Apart from Azure DevOps, other common tools include GitHub, Jira Agile, Trello, Active Collab, Agilo for
Scrum, SpiraTeam, Icescrum, SprintGround, Gravity, Taiga, VersionOne, Agilean, Wrike, Axosoft, Assembla,
PlanBox, Asana, Binfire, Proggio, VivifyScrum, and many others.
Migrating to DevOps
What can Azure DevOps do?
Azure DevOps is a Software as a service (SaaS) platform from Microsoft that provides an end-to-end
DevOps toolchain for developing and deploying software. It also integrates with most leading tools on
the market and is a great option for orchestrating a DevOps toolchain.
●● Actions: Allows for the creation of automation workflows. These workflows can include environment
variables and customized scripts.
●● Artifacts: The majority of the world's open-source projects are already contained in GitHub reposito-
ries. GitHub makes it easy to integrate with this code, and with other third-party offerings.
●● Security: Provides detailed code scanning and review features, including automated code review
assignment.
Security groups
Azure DevOps is pre-configured with default security groups. Default permissions are assigned to the
default security groups. But you can also configure access at the organization level, the collection level,
and at the project or object level.
In the organization settings in Azure DevOps, you can configure app access policies. Based on your
security policies, you might allow alternate authentication methods, allow third party applications to
access via OAuth, or even allow anonymous access to some projects. For even tighter control, you can set
conditional access to Azure DevOps. This offers simple ways to help secure resources when using Azure
Active Directory for authentication.
Multifactor authentication
Conditional access policies such as multifactor authentication can help to minimize the risk of compro-
mised credentials. As part of a conditional access policy, you might require security group membership, a
location or network identity, a specific operating system, a managed device, or other criteria.
Jira
Jira is a commonly used work management tool.
In the Visual Studio Marketplace, Solidify14 offers a tool for Jira to Azure DevOps migration. It does the
migration in two phases. Jira issues are exported to files and then the files are imported to Azure DevOps.
If you decide to try to write the migration code yourself, the following blog post provides sample code
that might help you to get started:
Migrate your project from Jira to Azure DevOps15
Other applications
Third party organizations do offer commercial tooling to assist with migrating other work management
tools like Aha, BugZilla, ClearQuest, and others to Azure DevOps.
13 https://marketplace.visualstudio.com/items?itemName=ms-vsts.services-trello
14 https://marketplace.visualstudio.com/items?itemName=solidify-labs.jira-devops-migration
15 http://www.azurefieldnotes.com/2018/10/01/migrate-your-project-from-jira-to-azure-devops/
16 https://docs.microsoft.com/en-us/azure/devops/test/load-test/get-started-jmeter-test?view=vsts
17 https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-PesterRunner-Task
18 https://marketplace.visualstudio.com/items?itemName=AjeetChouksey.soapui
19 https://marketplace.visualstudio.com/search?term=test%20management&target=AzureDevOps&category=All%20
categories&sortBy=Relevance
20 https://azure.microsoft.com/en-us/pricing/details/devops/azure-devops-services/
21 https://github.com/pricing/
Lab
Lab 01: Agile planning and portfolio manage-
ment with Azure Boards
Lab overview
In this lab, you will learn about the agile planning and portfolio management tools and processes
provided by Azure Boards and how they can help you quickly plan, manage, and track work across your
entire team. You will explore the product backlog, sprint backlog, and task boards which can be used to
track the flow of work during the course of an iteration. We will also take a look at how the tools have
been enhanced in this release to scale for larger teams and organizations.
Objectives
After you complete this lab, you will be able to:
●● Manage teams, areas, and iterations
●● Manage work items
●● Manage sprints and capacity
●● Customize Kanban boards
●● Define dashboards
●● Customize team process
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions22
22 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
An Agile tool that is used to manage and visualize work by showing tasks moving from left to right across
columns representing stages. What is this tool commonly called?
Backlog
Kanban Board
Review Question 3
In which of the following would you find large amounts of technical debt?
Greenfield project
Brownfield project
Review Question 4
As a project metric, what is Lead Time measuring?
Review Question 5
What is a cross-functional team?
Answers
Review Question 1
Which of the following would a system that manages inventory in a warehouse be considered?
■■ System of Record
System of Engagement
Explanation
Systems that are providing the truth about data elements are often called Systems of Record.
Review Question 2
An Agile tool that is used to manage and visualize work by showing tasks moving from left to right across
columns representing stages. What is this tool commonly called?
Backlog
■■ Kanban Board
Explanation
A Kanban Board lets you visualize the flow of work and constrain the amount of work in progress. Your
Kanban board turns your backlog into an interactive signboard, providing a visual flow of work.
Review Question 3
In which of the following would you find large amounts of technical debt?
Greenfield project
■■ Brownfield project
Explanation
Brownfield projects come with the baggage of existing code bases, existing teams, and often a great
amount of technical debt. Even so, they can still be ideal projects for DevOps transformations.
As a project metric, what is Lead Time measuring?
Lead time measures the total time elapsed from the creation of work items to their completion.
A team that brings people with different functional expertise, and often from different departments, together
to work toward a common goal.
Module 2 Getting Started with Source Control
Module overview
Module overview
Source control is fundamental to DevOps. These days you'll hardly find any resistance to the use of
source control; however, there is still some ambiguity around the differences between the two types
of source control systems and where each type is better suited.
Learning objectives
After completing this module, students will be able to:
●● Describe the benefits of using Source Control
●● Describe Azure Repos and GitHub
●● Migrate from TFVC to Git
It’s also helpful for non-developers in an organization to understand the fundamentals of the discipline
as it is so deeply rooted in the daily life of software engineers. This is particularly important if those
individuals are making decisions about which version control tools and platforms to use.
Version control is important for all software development projects and is particularly vital at large
businesses and enterprises. Enterprises have many stakeholders, distributed teams, strict processes and
workflows, siloed organizations, and hierarchical structures. All those characteristics represent coordination
and integration challenges when it comes to merging and deploying code. This is even more true for
companies in highly regulated industries, such as banking and healthcare, which have many rules and
regulations and need a practical way to ensure that all standards are met and risk is mitigated.
1 https://puppet.com/resources/report/state-of-devops-report
Without version control, you're tempted to keep multiple copies of code on your computer. This is
dangerous: it's easy to change or delete a file in the wrong copy of code, potentially losing work. Version
control systems solve this problem by managing all versions of your code but presenting you with a
single version at a time.
Tools and processes alone are not enough to accomplish this; hence the adoption of Agile, Continuous
Integration, and DevOps. All of these rely on a solid version control practice.
Version control is about keeping track of every change to software assets: tracking and managing the
who, what, and when. Version control is a first step needed to assure quality at the source, to ensure flow
and pull value, and to focus on process. All of these create value not just for the software teams, but
ultimately for the customer.
Version control is a solution for managing and saving changes made to any manually created assets. It
allows you to go back in time and easily roll back to previously working versions if changes are made to
source code. Version control tools allow you to see who made changes, when and what exactly was
changed. Version control also makes experimenting easy and most importantly makes collaboration
possible. Without version control, collaborating over source code would be a painful operation.
There are several perspectives on version control. For developers though, this is a daily enabler for work
and collaboration to happen. It’s part of the daily job, one of the most-used tools. For management, the
key value of version control is in IP security, risk management and time-to-market speed through
Continuous Delivery where version control is a fundamental enabler.
Whether you are writing code professionally or personally, you should always version your code using a
source control management system. Some of the advantages of using source control are:
●● Create workflows. Version control workflows prevent the chaos of everyone using their own develop-
ment process with different and incompatible tools. Version control systems provide process enforce-
ment and permissions, so everyone stays on the same page.
●● Work with versions. Every version has a description in the form of a comment. These descriptions
help you follow changes in your code by version instead of by individual file changes. Code stored in
versions can be viewed and restored from version control at any time as needed. This makes it easy to
base new work off any version of code.
●● Collaboration. Version control synchronizes versions and makes sure that your changes don't
conflict with other changes from your team. Your team relies on version control to help resolve and
prevent conflicts, even when people make changes at the same time.
●● Maintains history of changes. Version control keeps a history of changes as your team saves new
versions of your code. This history can be reviewed to find out who, why, and when changes were
made. History gives you the confidence to experiment since you can roll back to a previous good
version at any time. History lets you base new work on any version of code, such as to fix a bug in a
previous release.
●● Automate tasks. Version control automation features save your team time and generate consistent
results. Automate testing, code analysis and deployment when new versions are saved to version
control.
Common software development values
●● Reusability – why do the same thing twice? Re-use of code is a common practice and makes building
on existing assets simpler.
●● Traceability – Audits are not just for fun; in many industries this is a legal matter. All activity must be
traced, and managers must be able to produce reports when needed. Traceability also makes debugging
and identifying root cause easier. Additionally, this will help with feature re-use as developers
can link requirements to implementation.
●● Manageability – Can team leaders define and enforce workflows, review rules, create quality gates
and enforce QA throughout the lifecycle?
●● Efficiency – are we using the right resources for the job and are we minimizing time and efforts? This
one is self-explanatory.
●● Collaboration – When teams work together quality tends to improve. We catch one another’s
mistakes and can build on each other’s strengths.
●● Learning – Organizations benefit when they invest in employees learning and growing. This is not
only important for on-boarding new team members, but for the lifelong learning of seasoned mem-
bers and the opportunity for workers to contribute not just to the bottom line but to the industry.
Centralized source control systems are based on the idea that there is a single “central” copy of your
project somewhere (probably on a server), and programmers will check in (or commit) their changes to
this central copy. “Committing” a change simply means recording the change in the central system. Other
programmers can then see this change. They can also pull down the change, and the version control tool
will automatically update the contents of any files that were changed. Most modern version control
systems deal with “changesets,” which simply are a group of changes (possibly to many files) that should
be treated as a cohesive whole. For example, a change to a C header file and the corresponding .c file
should always be kept together. Programmers no longer must keep many copies of files on their hard
drives manually, because the version control tool can talk to the central copy and retrieve any version
they need on the fly.
Some of the most common centralized version control systems you may have heard of or used are TFVC,
CVS, Subversion (or SVN) and Perforce.
Over time, so-called “distributed” source control or version control systems (DVCS for short) have become
the most important. The three most popular of these are Mercurial, Git and Bazaar.
These systems do not necessarily rely on a central server to store all the versions of a project’s files.
Instead, every developer “clones” a copy of a repository and has the full history of the project on their
own hard drive. This copy (or “clone”) has all the metadata of the original.
This method may sound wasteful, but in practice, it’s not a problem. Most programming projects consist
mostly of plain text files (and maybe a few images), and disk space is so cheap that storing many copies
of a file doesn’t create a noticeable dent in a hard drive’s free space. Modern systems also compress the
files to use even less space.
The act of getting new changes from a repository is usually called "pulling," and the act of moving your
own changes to a repository is called "pushing". In both cases, you move changesets (changes to files
grouped as coherent wholes), not single-file diffs.
One common misconception about distributed version control systems is that there cannot be a central
project repository. This is simply not true. There is nothing stopping you from saying “this copy of the
project is the authoritative one.” This means that instead of a central repository being required by the
tools you use, it is now optional and purely a social issue.
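As a minimal sketch of this workflow (the repository URL is a placeholder), a developer clones the full repository once and then moves changesets back and forth with pull and push:
git clone https://example.com/team/project.git   # full copy of the repository, including its history
cd project
git pull                                          # bring in the latest changesets from the remote
git add .
git commit -m "Describe the change"               # record a changeset locally
git push                                          # publish your changesets to the shared remote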
Git (distributed)
Git is a distributed version control system. Each developer has a copy of the source repository on their
dev machine and can perform version control operations such as history and compare without a network
connection. Branches are lightweight. When you need to switch contexts, you can create a private local
branch. You can quickly switch from one branch to another to pivot among different variations of your
codebase. Later, you can merge, publish, or dispose of the branch.
TFVC (centralized)
Team Foundation Version Control (TFVC) is a centralized version control system. Typically, team members
have only one version of each file on their dev machines. Historical data is maintained only on the server.
Branches are path-based and created on the server.
TFVC has two workflow models:
●● Server workspaces - Before making changes, team members publicly check out files. Most opera-
tions require developers to be connected to the server. This system facilitates locking workflows.
Other systems that work this way include Visual Source Safe, Perforce, and CVS. With server workspac-
es, you can scale up to very large codebases with millions of files per branch and large binary files.
●● Local workspaces - Each team member takes a copy of the latest version of the codebase with them
and works offline as needed. Developers check in their changes and resolve conflicts, as necessary.
Another system that works this way is Subversion.
Why Git?
Switching from a centralized version control system to Git changes the way your development team
creates software. And, if you’re a company that relies on its software for mission-critical applications,
altering your development workflow impacts your entire business. Developers would gain the following
benefits by moving to Git.
Community
In many circles, Git has come to be the expected version control system for new projects. If your team is
using Git, odds are you won’t have to train new hires on your workflow, because they’ll already be familiar
with distributed development.
In addition, Git is very popular among open-source projects. This means it’s easy to leverage 3rd-party
libraries and encourage others to fork your own open-source code.
Distributed development
In TFVC, each developer gets a working copy that points back to a single central repository. Git, however,
is a distributed version control system. Instead of a working copy, each developer gets their own local
repository, complete with a full history of commits.
Having a full local history makes Git fast, since it means you don’t need a network connection to create
commits, inspect previous versions of a file, or perform diffs between commits.
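For example, all of the following commands work entirely against the local repository, with no network connection required:
git commit -am "Fix typo in README"   # create a commit locally
git log --oneline                     # inspect the full commit history
git diff HEAD~1 HEAD                  # compare the last two commits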
Distributed development also makes it easier to scale your engineering team. If someone breaks the
production branch in SVN, other developers can’t check in their changes until it’s fixed. With Git, this kind
of blocking doesn’t exist. Everybody can continue going about their business in their own local reposito-
ries.
And, like feature branches, distributed development creates a more reliable environment. Even if a
developer obliterates their own repository, they can simply clone someone else’s and start afresh.
Trunk-based development
One of the biggest advantages of Git is its branching capabilities. Unlike centralized version control
systems, Git branches are cheap and easy to merge.
In trunk-based development, short-lived branches provide an isolated environment for every change to
your codebase. When a developer wants to start working on something, no matter how big or small, they
create a new branch. This ensures that the master branch always contains production-quality code.
Using short-lived branches in this way is not only more reliable than directly editing production code, but
it also provides organizational benefits. Branches let you represent development work at the same
granularity as your agile backlog. For example, you might implement a policy where each work item is
addressed in its own feature branch.
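A sketch of that policy in practice (the branch name and work item number are hypothetical):
git checkout master
git pull                                  # start from the latest production-quality code
git checkout -b feature/1234-add-search   # one short-lived branch per backlog item
# ...commit work here, then merge the branch back through a pull request...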
Pull requests
Many source code management tools such as Azure Repos enhance core Git functionality with pull
requests. A pull request is a way to ask another developer to merge one of your branches into their
repository. This not only makes it easier for project leads to keep track of changes, but also lets develop-
ers initiate discussions around their work before integrating it with the rest of the codebase.
Since they’re essentially a comment thread attached to a feature branch, pull requests are extremely
versatile. When a developer gets stuck with a hard problem, they can open a pull request to ask for help
from the rest of the team. Alternatively, junior developers can be confident that they aren’t destroying the
entire project by treating pull requests as a formal code review.
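In Azure Repos, pull requests can be created from the web portal or, if the Azure DevOps CLI extension is installed, from the command line. A minimal sketch (branch names, title, and description are placeholders):
git push origin feature/1234-add-search
az repos pr create --source-branch feature/1234-add-search --target-branch master --title "Add search" --description "Implements backlog item 1234"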
As you might expect, Git works very well with continuous integration and continuous delivery environ-
ments. Git hooks allow you to run scripts when certain events occur inside of a repository, which lets you
automate deployment to your heart’s content. You can even build or deploy code from specific branches
to different servers.
For example, you might want to configure Git to deploy the most recent commit from the develop branch
to a test server whenever anyone merges a pull request into it. Combining this kind of build automation
with peer review means you have the highest possible confidence in your code as it moves from develop-
ment to staging to production.
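As a simple illustration of a Git hook (a client-side pre-push hook; the build command assumes a .NET project like the one used later in this module):
#!/bin/sh
# Save as .git/hooks/pre-push and mark it executable (chmod +x .git/hooks/pre-push).
# Abort the push if the project does not build.
dotnet build --nologo || exit 1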
Overwriting history
Git technically does allow you to overwrite history, but like any useful feature, it can cause problems if
used incorrectly. If your teams are careful, they should never have to overwrite history. And if you're
synchronizing to Azure Repos, you can also add a security rule that prevents developers from overwriting
history by using the explicit “Force Push” permissions. Every source control system works best when the
developers using it understand how it works and which conventions work. While you can't overwrite
history with TFVC, you can still overwrite code and do other painful things.
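For completeness, this is roughly what overwriting history looks like; it is a sketch only, and should be used cautiously, on branches you own, and only where your team's policies (and the Force Push permission) allow it:
git rebase -i HEAD~3          # rewrite the last three local commits
git push --force-with-lease   # overwrite the remote branch, but fail if someone else pushed in the meantime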
Large files
Git works best with repos that are small and do not contain large files (or binaries). Every time you (or
your build machines) clone the repo, they get the entire repo with all its history from the first commit.
This is great for most situations, but can be frustrating if you have large files. Binary files are even worse
because Git just can't optimize how they are stored. That's why Git LFS2 was created. This lets you
separate large files out of your repos and still have all the benefits of versioning and comparing. Also, if
you're used to storing compiled binaries in your source repos, stop! Use Azure Artifacts3 or some other
package management tool to store binaries for which you have source code. However, teams that have
large files (like 3D models or other assets) can use Git LFS to keep the code repo slim and trim.
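A minimal Git LFS sketch (the file pattern and file name are examples):
git lfs install               # one-time setup per machine
git lfs track "*.psd"         # route matching files through LFS instead of regular repo storage
git add .gitattributes        # the tracking rules live in .gitattributes
git add design.psd
git commit -m "Add design asset via Git LFS"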
Learning curve
There is a learning curve. If you've never used source control before, you're probably better off starting
with Git. I've found that users of centralized source control (TFVC or Subversion) struggle initially to
make the mental shift, especially around branches and synchronizing. Once developers understand how
Git branches work and get over the fact that they must commit and then push, they have all the basics
they need to be successful in Git.
Getting ready
In this tutorial, we'll learn how to initialize a Git repository locally, then we'll use the ASP.NET Core MVC
project template to create a new project and version it in the local Git repository. We'll then use Visual
Studio Code to interact with the Git repository to perform basic operations of commit, pull, and push.
You'll need to set up your working environment with the following:
●● .NET Core 3.1 SDK or later: Download .NET4
2 https://git-lfs.github.com/
3 https://azure.microsoft.com/en-us/services/devops/artifacts/
4 https://dotnet.microsoft.com/download
How to do it
1. Open the Command Prompt and create a new working folder:
mkdir myWebApp
cd myWebApp
2. Initialize a new Git repository in the working folder:
git init
3. Configure global settings for the name and email address to be used when committing in this Git
repository:
git config --global user.name "John Doe"
git config --global user.email "john.doe@contoso.com"
If you are working behind an enterprise proxy, you can make your Git repository proxy-aware by adding
the proxy details in the Git global configuration file. There are different variations of this command that
will allow you to set up an HTTP/HTTPS proxy (with username/password) and optionally bypass SSL
verification. Run the below command to configure a proxy in your global git config.
git config --global http.proxy
http://proxyUsername:proxyPassword@proxy.server.com:port
4. Create a new ASP.NET Core MVC application. The new command offers a collection of switches that can be
used for language, authentication, and framework selection. More details can be found on Microsoft
docs11.
dotnet new mvc
5 https://code.visualstudio.com/Download
6 https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp
7 https://git-scm.com/downloads
8 https://gitforwindows.org/
9 https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens
10 https://marketplace.visualstudio.com/items?itemName=donjayamanne.githistory
11 https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-new
Launch Visual Studio Code in the context of the current working folder:
code .
5. When the project opens in Visual Studio Code, select Yes for the Required assets to build and
debug are missing from ‘myWebApp’. Add them? warning message. Select Restore for the There
are unresolved dependencies info message. Hit F5 to debug the application, then myWebApp will
load in the browser, as shown in the following screenshot:
If you prefer to use the command line, you can run the following commands in the context of the Git
repository to run the web application.
dotnet build
dotnet run
You'll notice the .vscode folder is added to your working folder. To avoid committing this folder into your
Git repository, you can include it in the .gitignore file. With the .vscode folder selected, hit F1 to launch
the command window in Visual Studio Code, type gitignore, and accept the option to include the
selected folder in the .gitignore file:
6. To stage and commit the newly created myWebApp project to your Git repository from Visual Studio
Code, navigate to the Git icon from the left panel. Add a commit comment and commit the changes
by clicking the checkmark icon. This will stage and commit the changes in one operation:
Open Program.cs; you'll notice GitLens decorates the classes and functions with the commit history and
brings this information inline on every line of code:
7. Now launch cmd in the context of the git repository and run git branch --list. This will show
you that currently only master branch exists in this repository. Now run the following command to
create a new branch called feature-devops-home-page
git branch feature-devops-home-page
git checkout feature-devops-home-page
git branch --list
With these commands, you have created a new branch and checked it out. The --list option shows you a
list of all branches in your repository. The green colour represents the branch that's currently checked
out.
8. Now navigate to the file ~\Views\Home\Index.cshtml and replace the contents with the text
below.
@{
ViewData["Title"] = "Home Page";
}
<div class="text-center">
<h1 class="display-4">Welcome</h1>
<p>Learn about <a href="https://azure.microsoft.com/en-gb/services/
devops/">Azure DevOps</a>.</p>
</div>
10. In the context of the Git repository, execute the following commands. These commands will stage the
changes in the branch and then commit them.
git status
git add .
git commit -m "update welcome page"
git status
11. In order to merge the changes from the feature-devops-home-page branch into master, run the following
commands in the context of the Git repository.
git checkout master
git merge feature-devops-home-page
How it works
The easiest way to understand the outcome of the steps done earlier is to check the history of the
operation. Let's have a look at how to do this.
1. In Git, committing changes to a repository is a two-step process. Upon running git add . the changes
are staged but not committed. Running git commit then promotes the staged changes into the
repository.
2. To see the history of changes in the master branch, run the command git log
3. To investigate the actual changes in the commit, you can run the command git log -p
There is more
Git makes it easy to back out changes. Following our example, if you wanted to take out the changes
made to the welcome page, this can be done by hard resetting the master branch to a previous commit
using the command below.
git reset --hard 5d2441f0be4f1e4ca1f8f83b56dee31251367adc
Running the above command would reset the branch to the project init change; if you run git log
you'll see that the changes done to the welcome page are completely removed from the repository.
12 https://docs.microsoft.com/en-us/azure/devops/repos/?view=azure-devops
Introduction to GitHub
What is GitHub?
GitHub is the largest open-source community in the world. GitHub is owned by Microsoft. GitHub is a
development platform inspired by the way you work. From open source to business, you can host and
review code, manage projects, and build software alongside 40 million developers. GitHub is a Git
repository hosting service, but it adds many of its own features. While Git is a command line tool, GitHub
provides a web-based graphical interface. It also provides access control and several collaboration
features, such as wikis and basic task management tools for every project. So what are the main
benefits of using GitHub? Nearly every open-source project uses GitHub to manage its project.
Using GitHub is free if your project is open source, and it includes a wiki and issue tracker that
make it easy to include more in-depth documentation and get feedback about your project.
●● Seamless code review: Code review is the surest path to better code, and it’s fundamental to how
GitHub works. Built-in review tools make code review an essential part of your team’s process.
●● Propose changes: Better code starts with a Pull Request, a living conversation about changes
where you can talk through ideas, assign tasks, discuss details, and conduct reviews.
●● Request reviews: If you’re on the other side of a review, you can request reviews from your peers
to get the exact feedback you need.
●● See the difference: Reviews happen faster when you know exactly what’s changed. Diffs compare
versions of your source code side by side, highlighting the parts that are new, edited, or deleted.
●● Comment in context: Discussions happen in comment threads, right within your code. Bundle
comments into one review or reply to someone else’s inline to start a conversation.
●● Give clear feedback: Your teammates shouldn’t have to think too hard about what a thumbs up
emoji means. Specify whether your comments are required changes or just a few suggestions.
●● Protect branches: Only merge the highest quality code. You can configure repositories to require
status checks, reducing both human error and administrative overhead.
●● All your code and documentation in one place: There are hundreds of millions of private, public, and
open-source repositories hosted on GitHub. Every repository is equipped with tools to help you host,
version, and release code and documentation.
●● Code where you collaborate: Repositories keep code in one place and help your teams collaborate
with the tools they love, even if you work with large files using Git LFS. With unlimited private
repositories for individuals and teams, you can create or import as many projects as you’d like.
●● Documentation alongside your code: Host your documentation directly from your repositories
with GitHub Pages. Use Jekyll as a static site generator and publish your Pages from the /docs
folder on your master branch.
●● Manage your ideas: Coordinate early, stay aligned, and get more done with GitHub’s project manage-
ment tools.
●● See your project’s big picture: See everything happening in your project and choose where to
focus your team’s efforts with Projects, task boards that live right where they belong: close to your
code.
●● Track and assign tasks: Issues help you identify, assign, and keep track of tasks within your team.
You can open an Issue to track a bug, discuss an idea with an @mention, or start distributing work.
●● The human side of software: Building software is as much about managing teams and communities as
it is about code. Whether you’re on a team of two or two thousand, GitHub's got the support your
people need.
●● Manage and grow teams: Help people get organized with GitHub teams, level up access with
administrative roles, and fine tune your permissions with nested teams.
●● Keep conversations on topic: Moderation tools, like issue and pull request locking, help your team
stay focused on code. And if you maintain an open-source project, user blocking reduces noise
and ensures conversations are productive.
●● Set community guidelines: Set roles and expectations without starting from scratch. Customize
common codes of conduct to create the perfect one for your project. Then choose a pre-written
license right from your repository.
GitHub offers great learning resources for its platform. You can find everything from Git introduction
training to deep dives on publishing static pages to GitHub and how to do DevOps on GitHub right here13.
Authenticating to GitHub
Azure Boards needs to be able to connect to GitHub. For GitHub in the cloud, when adding a GitHub
connection, the authentication options are:
●● Username/Password
●● Personal Access Token (PAT)
For a walkthrough on making the connection, see: Connect Azure Boards to GitHub15
For details on linking to workitems, see: Link GitHub commits, pull requests, and issues to work
items16
13 https://lab.github.com/
14 https://github.com/marketplace/azure-boards
15 https://docs.microsoft.com/en-us/azure/devops/boards/github/connect-to-github?view=azure-devops
16 https://docs.microsoft.com/en-us/azure/devops/boards/github/link-to-from-github?view=azure-devops
Import repository
The Import repository feature also allows you to import a Git repository. This is especially useful if you
want to move your Git repositories from GitHub or any other public or private hosting space into Azure
Repos.
There are some limitations here (that apply only when migrating source type TFVC): a single branch and
only 180 days of history. However, if you only care about one branch and you're already in Azure DevOps,
then this is a very simple but effective way to do the migration.
Using git-tfs
What if you need to migrate more than a single branch and retain branch relationships? Or do you need
to bring all the history with you? In that case, you're going to have to use git-tfs. This is an open-source
project that is built to synchronize Git and TFVC repos. But you can use it to do a once-off migration
using git tfs clone. git-tfs has the advantage that it can migrate multiple branches and will preserve the
relationships so that you can merge branches in Git after you migrate. Be warned that it can take a while
to do this conversion - especially for large repos or repos with long history. You can easily dry run the
migration locally, iron out any issues and then do it for real. There's lots of flexibility with this tool, so I
highly recommend it.
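A sketch of a once-off migration with git-tfs (the collection URL, branch path, and target repository are placeholders, and the --branches option is available in recent git-tfs versions):
git tfs clone https://dev.azure.com/fabrikam/DefaultCollection $/MyProject/Main --branches=all
cd Main
git remote add origin https://dev.azure.com/fabrikam/MyProject/_git/MyProject
git push --all origin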
If you're on Subversion, then you can use git svn to import your Subversion repo in a similar manner to
using git-tfs.
17 https://github.com/git-tfs/git-tfs/blob/master/doc/commands/clone.md
Lab
Lab 02: Version controlling with Git in Azure Re-
pos
Lab overview
Azure DevOps supports two types of version control, Git and Team Foundation Version Control (TFVC).
Here is a quick overview of the two version control systems:
●● Team Foundation Version Control (TFVC): TFVC is a centralized version control system. Typically,
team members have only one version of each file on their dev machines. Historical data is maintained
only on the server. Branches are path-based and created on the server.
●● Git: Git is a distributed version control system. Git repositories can live locally (such as on a develop-
er's machine). Each developer has a copy of the source repository on their dev machine. Developers
can commit each set of changes on their dev machine and perform version control operations such as
history and compare without a network connection.
Git is the default version control provider for new projects. You should use Git for version control in your
projects unless you have a specific need for centralized version control features in TFVC.
In this lab, you will learn how to establish a local Git repository, which can easily be synchronized with a
centralized Git repository in Azure DevOps. In addition, you will learn about Git branching and merging
support. You will use Visual Studio Code, but the same processes apply for using any Git-compatible
client.
Objectives
After you complete this lab, you will be able to:
●● Clone an existing repository
●● Save work with commits
●● Review history of changes
●● Work with branches by using Visual Studio Code
Lab duration
●● Estimated time: 50 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions18
18 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
What are the benefits of using distributed version control? Mark all that apply.
permits monitoring of usage
complete offline support
cross platform support
allows exclusive file locking
portable history
Review Question 3
What are the benefits of using centralized version control? Mark all that apply.
easily scales for very large codebases
an open-source friendly code review model via pull requests
granular permission control
an enthusiastic growing user base
Review Question 4
What is source control?
Answers
Review Question 1
What are some of the benefits of source control? Mark all that apply.
■■ reusability
■■ collaboration
■■ manageability
■■ efficiency
accountability
■■ traceability
■■ automate tasks
Explanation
Source control is the practice of tracking and managing changes to code. Benefits include reusability,
traceability, manageability, efficiency, collaboration, learning, creating workflows, working with versions,
maintaining a history of changes, and automating tasks.
Review Question 2
What are the benefits of using distributed version control? Mark all that apply.
permits monitoring of usage
■■ complete offline support
■■ cross platform support
allows exclusive file locking
■■ portable history
Review Question 3
What are the benefits of using centralized version control? Mark all that apply.
■■ easily scales for very large codebases
an open-source friendly code review model via pull requests
■■ granular permission control
an enthusiastic growing user base
What is source control?
Source control (or version control) is the practice of tracking and managing changes to code.
Module overview
Module overview
Technical Debt refers to the trade-off between decisions that make something easy in the short term and
the ones that make it maintainable in the long term. Companies constantly need to trade off between
solving the immediate, pressing problems and fixing long-term issues. Part of the solution to this prob-
lem is to create a quality-focused culture that encourages shared responsibility and ownership for both
code quality and security compliance. Azure DevOps has great tooling and ecosystem to improve code
quality and apply automated security checks.
Learning objectives
After completing this module, students will be able to:
●● Manage code quality, including technical debt, SonarCloud, and other tooling solutions
●● Build organizational knowledge on code quality
Reliability
Reliability measures the probability that a system will run without failure over a specific period of opera-
tion. It relates to the number of defects and availability of the software.
Number of defects can be measured by running a static analysis tool. Software availability can be meas-
ured using the mean time between failures (MTBF). Low defect counts are especially important for
developing a reliable codebase.
Maintainability
Maintainability measures how easily software can be maintained. It relates to the size, consistency,
structure, and complexity of the codebase. And ensuring maintainable source code relies on several
factors, such as testability and understandability.
You can’t use a single metric to ensure maintainability. Some metrics you may consider to improve
maintainability are the number of stylistic warnings and Halstead complexity measures. Both automation
and human reviewers are essential for developing maintainable codebases.
Testability
Testability measures how well the software supports testing efforts. It relies on how well you can control,
observe, isolate, and automate testing, among other factors.
Testability can be measured based on how many test cases you need to find potential faults in the
system. Size and complexity of the software can impact testability. So, applying methods at the code level
— such as cyclomatic complexity — can help you improve the testability of the component.
Portability
Portability measures how usable the same software is in different environments. It relates to platform
independency.
There isn’t a specific measure of portability. But there are several ways you can ensure portable code. It’s
important to regularly test code on different platforms, rather than waiting until the end of development.
It’s also a good idea to set your compiler warning levels as high as possible — and use at least two
compilers. Enforcing a coding standard also helps with portability.
Reusability
Reusability measures whether existing assets — such as code — can be used again. Assets are more
easily reused if they have characteristics such as modularity or loose coupling.
Reusability can be measured by the number of interdependencies. Running a static analyzer can help you
identify these interdependencies.
Complexity metrics
While there are various quality metrics, a few of the most important ones are listed below.
Complexity metrics can help in measuring quality. Cyclomatic complexity measures the number of
linearly independent paths through a program's source code. Another way to understand quality is
through calculating Halstead complexity measures. These measure:
●● Program vocabulary
●● Program length
●● Calculated program length
●● Volume
●● Difficulty
●● Effort
Code analysis tools can be used to check for considerations such as security, performance, interoperability,
language usage, and globalization, and they should be part of every developer's toolbox and software build
process. Regularly running a static code analysis tool and reading its output is a great way to improve as
a developer, because the things caught by the software rules can often teach you something.
✔️ Note: Over time, technical debt must be paid back. Otherwise, the team's ability to fix issues, and to
implement new features and enhancements will take longer and longer, and eventually become cost
prohibitive.
1 https://sonarcloud.io/about
If you drill into the issues, you can then see what the issues are, along with suggested remedies, and
estimates of the time required to apply a remedy.
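For a .NET codebase, one common way to feed analysis results to SonarCloud is the SonarScanner for .NET; a minimal sketch (the project key, organization, and token are placeholders):
dotnet tool install --global dotnet-sonarscanner
dotnet sonarscanner begin /k:"my-project-key" /o:"my-organization" /d:sonar.host.url="https://sonarcloud.io" /d:sonar.login="<token>"
dotnet build
dotnet sonarscanner end /d:sonar.login="<token>"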
NDepend
For .NET developers, a common tool is NDepend.
NDepend is a Visual Studio extension that assesses the amount of technical debt that a developer has
added during a recent development period, typically in the last hour. With this information, the developer
might be able to make the required corrections before ever committing the code. NDepend lets you
create code rules that are expressed as C# LINQ queries, but it has many built-in rules that detect a wide
range of code smells.
2 https://www.ndepend.com
3 https://marketplace.visualstudio.com/items?itemName=ndepend.ndependextension&targetId=2ec491f3-0a97-4e53-bfef-20bf80c7e1ea
4 https://marketplace.visualstudio.com/items?itemName=alanwales.resharper-code-analysis
It's important, up front, to agree that everyone is trying to achieve better code quality. Achieving code
quality can seem challenging because there is no one single best way to write any piece of code, at least
code with any complexity.
Everyone wants to do good work and to be proud of what they create. This means that it's easy for
developers to become over-protective of their code. The organizational culture must let all involved feel
that code reviews are more like mentoring sessions, where ideas about how to improve code are shared,
than interrogation sessions, where the aim is to identify problems and blame the author.
The knowledge sharing that can occur in mentoring-style sessions can be one of the most important
outcomes of the code review process. It often happens best in small groups (perhaps even just two
people), rather than in large team meetings. And it's important to highlight what has been done well, not
just what needs to be improved.
Developers will often learn more in effective code review sessions than they will in any type of formal
training. Reviewing code should be an opportunity for all involved to learn, not just as a chore that must
be completed as part of a formal process.
It's easy to see two or more people working on a problem and think that one person could have com-
pleted the task by themselves. That's a superficial view of the longer-term outcomes. Team management
needs to understand that improving the code quality reduces the cost of code, not increases it. Team
leaders need to establish and foster an appropriate culture across their teams.
Use the project wiki to share information with your team so they can understand and contribute to your project.
Wikis are stored in a repository. They need to be created; no wiki is automatically provisioned.
Prerequisites
You must have the permission Create Repository to publish code as wiki. While the Project Administra-
tors group has this permission by default, it can be assigned to others.
To add or edit wiki pages, you should be a member of the Contributors group.
All members of the team project (including stakeholders) can view the wiki.
Creation
The following article includes details on creating a wiki: Create a Wiki for your project5
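If you prefer the command line and have the Azure DevOps CLI extension installed, a project wiki can also be provisioned with a command along these lines (a sketch; the project name is a placeholder):
az devops wiki create --name "MyProject.wiki" --project "MyProject"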
Wiki contents
Azure DevOps Wikis are written in Markdown and can also include file attachments and videos.
Markdown
Markdown is a lightweight markup language in which plain text includes formatting syntax. It has become
the de facto standard for how project and software documentation is now written.
One key reason for this is that, because it is made up of plain text, it is much easier to merge, in much the
same way that program code is merged. This allows documents to be managed with the same tools used
to create the other code in a project.
Mermaid
Mermaid has become an important extension to Markdown because it allows diagrams to be included in
the documentation. This overcomes the previous difficulties in merging documentation that includes
diagrams represented as binary files.
5 https://docs.microsoft.com/en-us/azure/devops/project/wiki/wiki-create-repo?view=azure-devops&tabs=browser
6 https://docs.microsoft.com/en-us/azure/devops/project/wiki/publish-repo-to-wiki?view=azure-devops&tabs=browser
7 https://mermaid-js.github.io/mermaid/
Lab
Lab 03: Sharing team knowledge using Azure
project wikis
Lab overview
In this lab, you will create and configure a wiki in Azure DevOps, including managing markdown content
and creating a Mermaid diagram.
Objectives
After you complete this lab, you will be able to:
●● Create a wiki in an Azure Project
●● Add and edit markdown
●● Create a Mermaid diagram
Lab duration
●● Estimated time: 45 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions8
8 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
What are code smells? Give an example of a code smell.
Review Question 3
You are using Azure Repos for your application source code repository. You want to create an audit of
open-source libraries that you have used. Which tool could you use?
Review Question 4
Name three attributes of high-quality code.
Review Question 5
You are using Azure Repos for your application source code repository. You want to perform code quality
checks. Which tool could you use?
Answers
What are Mermaid diagrams?
Mermaid diagrams are diagrams described in plain text using the Mermaid syntax and rendered from Markdown, which allows diagrams to be included in documentation and merged in the same way as code.
What are code smells? Give an example of a code smell.
Code smells are characteristics in your code that could possibly be a problem. Code smells hint at deeper
problems in the design or implementation of the code. For example, code that works but contains many
literal values or duplicated code.
You are using Azure Repos for your application source code repository. You want to create an audit of
open-source libraries that you have used. Which tool could you use?
WhiteSource Bolt is used to analyze open-source library usage. OWASP ZAP is designed to run penetration
testing against applications. The two Sonar products are for code quality and code coverage analysis.
Name three attributes of high-quality code.
High-quality code should have well-defined interfaces. It should be clear and easy to read, so self-documenting
code is desirable, as are short (not long) method bodies.
You are using Azure Repos for your application source code repository. You want to perform code quality
checks. Which tool could you use?
SonarCloud is the cloud-based version of the original SonarQube and would be best for working with code
in Azure Repos.
Module 4 Working with Git for Enterprise Dev-
Ops
Module overview
Module overview
As a version control system, Git is easy to get started with but difficult to master. While there is no one
way to implement Git in the right way, there are lots of techniques that can help you scale the implemen-
tation of Git across the organization. Simple things like structuring your code into micro repos, selecting
a lean branching and merging model, and leveraging pull requests for code review can make your teams
more productive.
Learning objectives
After completing this module, students will be able to:
●● Explain how to structure Git Repos
●● Describe Git branching workflows
●● Leverage pull requests for collaboration and code reviews
●● Leverage Git hooks for automation
●● Use Git to foster inner source across the organization
Some teams will post change logs as blog posts; others will create a CHANGELOG.md file in a GitHub
repository.
gitchangelog
One common tool is gitchangelog1. This tool is based on Python.
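As a minimal sketch (assuming Python and pip are available and that you run the tool from the root of a Git repository), installing gitchangelog and generating a changelog file might look like this:
pip install gitchangelog
gitchangelog > CHANGELOG.md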
1 https://pypi.org/project/gitchangelog/
2 https://github.com/github-changelog-generator/github-changelog-generator
Trunk-based development
Trunk-based development is a logical extension of Centralized Workflow. The core idea behind the
Feature Branch Workflow is that all feature development should take place in a dedicated branch instead
of the master branch. This encapsulation makes it easy for multiple developers to work on a particular
feature without disturbing the main codebase. It also means the master branch should never contain
broken code, which is a huge advantage for continuous integration environments.
GitFlow workflow
The GitFlow workflow was first published in a highly regarded 2010 blog post from Vincent Driessen at
nvie3. The Gitflow Workflow defines a strict branching model designed around the project release. This
workflow doesn’t add any new concepts or commands beyond what’s required for the Feature Branch
Workflow. Instead, it assigns very specific roles to different branches and defines how and when they
should interact.
Forking workflow
The Forking Workflow is fundamentally different than the other workflows discussed in this tutorial.
Instead of using a single server-side repository to act as the “central” codebase, it gives every developer a
server-side repository. This means that each contributor has not one, but two Git repositories: a private
local one and a public server-side one.
3 https://nvie.com/posts/a-successful-git-branching-model/
Create a branch
When you're working on a project, you're going to have a bunch of different features or ideas in progress
at any given time – some of which are ready to go, and others which are not. Branching exists to help you
manage this workflow.
When you create a branch in your project, you're creating an environment where you can try out new
ideas. Changes you make on a branch don't affect the master branch, so you're free to experiment and
commit changes, safe in the knowledge that your branch won't be merged until it's ready to be reviewed
by someone you're collaborating with.
Branching is a core concept in Git, and the entire branch flow is based upon it. There's only one rule:
anything in the master branch is always deployable.
Because of this, it's extremely important that your new branch is created off master when working on a
feature or a fix. Your branch name should be descriptive (e.g., refactor-authentication, user-content-
cache-key, make-retina-avatars), so that others can see what is being worked on.
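As an illustration (a sketch only; refactor-authentication is just one of the descriptive branch names suggested above), creating such a branch from master on the command line looks like this:
git checkout master
git checkout -b refactor-authentication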
Add commits
Once your branch has been created, it's time to start making changes. Whenever you add, edit, or delete
a file, you make a commit and add it to your branch. This process of adding commits keeps
track of your progress as you work on a feature branch.
Commits also create a transparent history of your work that others can follow to understand what you've
done and why. Each commit has an associated commit message, which is a description explaining why a
particular change was made. Furthermore, each commit is considered a separate unit of change. This lets
you roll back changes if a bug is found, or if you decide to head in a different direction.
Commit messages are important, especially since Git tracks your changes and then displays them as
commits once they're pushed to the server. By writing clear commit messages, you can make it easier for
other people to follow along and provide feedback.
Pull Requests initiate discussion about your commits. Because they're tightly integrated with the underly-
ing Git repository, anyone can see exactly what changes would be merged if they accept your request.
You can open a Pull Request at any point during the development process: when you have little or no
code but want to share some screenshots or general ideas, when you're stuck and need help or advice, or
when you're ready for someone to review your work. By using the @mention system in your Pull Request
message, you can ask for feedback from specific people or teams, whether they're down the hall or ten
time zones away.
Pull Requests are useful for contributing to projects and for managing changes to shared repositories. If
you're using a Fork & Pull Model, Pull Requests provide a way to notify project maintainers about the
changes you'd like them to consider. If you're using a Shared Repository Model, Pull Requests help start
code review and conversation about proposed changes before they're merged into the master branch.
Once a Pull Request has been opened, the person or team reviewing your changes may have questions or
comments. Perhaps the coding style doesn't match project guidelines, the change is missing unit tests, or
maybe everything looks great and props are in order. Pull Requests are designed to encourage and
capture this type of conversation.
You can also continue to push to your branch in light of discussion and feedback about your commits.
If someone comments that you forgot to do something or if there is a bug in the code, you can fix it in
your branch and push up the change. Git will show your new commits and any additional feedback you
may receive in the unified Pull Request view.
Pull Request comments are written in Markdown, so you can embed images and emoji, use pre-formatted
text blocks, and apply other lightweight formatting.
Deploy
With Git, you can deploy from a branch for final testing in an environment before merging to master.
Once your pull request has been reviewed and the branch passes your tests, you can deploy your changes
to verify them. If your branch causes issues, you can roll it back by deploying the existing master.
Merge
Now that your changes have been verified, it is time to merge your code into the master branch.
Once merged, Pull Requests preserve a record of the historical changes to your code. Because they're
searchable, they let anyone go back in time to understand why and how a decision was made.
By incorporating certain keywords into the text of your Pull Request, you can associate issues with code.
When your Pull Request is merged, the related issues can also close.
This workflow helps organize and track branches that are focused on business domain feature sets. Other
Git workflows like the Git Forking Workflow and the Gitflow Workflow are repo focused and can leverage
the Git Feature Branch Workflow to manage their branching models.
Getting ready
Let's cover the principles of what is being proposed:
●● The master branch:
●● The master branch is the only way to release anything to production.
●● The master branch should always be in a ready-to-release state.
●● Protect the master branch with branch policies.
●● Any changes to the master branch flow through pull requests only.
●● Tag all releases in the master branch with Git tags.
●● Pull requests:
How to do it
1. After you've cloned the master branch into a local repository, create a new feature branch, myFeature-1:
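The original command listing is not reproduced here; a minimal sketch of creating and switching to the feature branch locally (assuming you are working in the myWebApp repository) might be:
myWebApp> git checkout -b feature/myFeature-1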
2. Run the git branch command to see all the branches; the branch marked with an asterisk is the
currently checked-out branch:
myWebApp> git branch
* feature/myFeature-1
  master
4. Stage your changes and commit locally, then publish your branch to remote:
myWebApp> git status
On branch feature/myFeature-1
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
        modified: Program.cs
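The staging, commit, and publish commands are not shown above; a minimal sketch (the commit message is illustrative only) might be:
myWebApp> git add .
myWebApp> git commit -m "Feature-1 changes"
myWebApp> git push -u origin feature/myFeature-1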
5. Create a new pull request (using the Azure DevOps CLI) to review the changes in the feature-1 branch:
> az repos pr create --title "Review Feature-1 before merging to master"
--work-items 38 39 `
Use the --open switch when raising the pull request to open the pull request in a web browser after it has
been created. The --delete-source-branch switch can be used to delete the branch after the pull request is
complete. Also consider using --auto-complete to complete the pull request automatically when all policies
have passed and the source branch can be merged into the target branch.
The team jointly reviews the code changes and approves the pull request:
The master branch is now ready to release, so the team tags the master branch with the release number:
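The tagging command itself is not shown here; based on the release_feature1 tag referenced later in this walkthrough, a sketch of this step might be:
myWebApp> git tag release_feature1
myWebApp> git push origin release_feature1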
6. Start work on Feature 2. Create a branch on remote from the master branch and do the checkout
locally:
myWebApp> git push origin origin:refs/heads/feature/myFeature-2
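A sketch of the local checkout that follows (Git creates a local tracking branch for the newly pushed remote branch) might be:
myWebApp> git checkout feature/myFeature-2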
7. Modify Program.cs by changing the same comment line in the code that was changed in feature-1
public class Program
{
// Editing the same line (file from feature-2 branch)
public static void Main(string[] args)
{
BuildWebHost(args).Run();
}
8. Commit the changes locally, push to the remote repository, and then raise a pull request to merge
the changes from feature/myFeature-2 to the master branch:
> az repos pr create --title "Review Feature-2 before merging to master"
--work-items 40 42 `
-d "#Merge feature-2 to master" `
-s feature/myFeature-2 -t master -r myWebApp -p
$prj -i $1
With the pull request in flight, a critical bug is reported in production against the feature-1 release. To
investigate the issue, you need to debug against the version of code currently deployed in production. To
do this, create a new fof branch using the release_feature1 tag:
myWebApp> git checkout -b fof/bug-1 release_feature1
Switched to a new branch 'fof/bug-1'
9. Modify Program.cs by changing the same line of code that was changed in the feature-1 release:
public class Program
{
// Editing the same line (file from feature-FOF branch)
public static void Main(string[] args)
{
BuildWebHost(args).Run();
}
10. Stage and commit the changes locally, then push changes to the remote repository:
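The commands are not reproduced here; a minimal sketch (the commit message is illustrative only) might be:
myWebApp> git add .
myWebApp> git commit -m "Fix for Bug-1"
myWebApp> git push -u origin fof/bug-1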
11. Immediately after the changes have been rolled out to production, tag the fof/bug-1 branch with the
release_bug-1 tag, then raise a pull request to merge the changes from fof/bug-1 back into the
master:
> az repos pr create --title "Review Bug-1 before merging to master"
--work-items 100 `
-d "#Merge Bug-1 to master" `
-s fof/bug-1 -t master -r myWebApp -p
$prj -i $i
As part of completing the pull request, the branch is deleted; however, you can still reference the full history up to that
point using the tag.
With the critical bug fix out of the way, let's go back to the review of the feature-2 pull request. The
branches page makes it clear that the feature/myFeature-2 branch is one change ahead of the master
and two changes behind the master:
If you try to approve the pull request, you'll see an error message informing you of a merge conflict:
12. The Git Pull Request Merge Conflict resolution extension makes it possible to resolve merge conflicts
right in the browser. Navigate to the conflicts tab and click on Program.cs to resolve the merge
conflicts:
The user interface gives you the option to take the source version, target version, or add custom changes
and review and submit the merge. With the changes merged, the pull request is completed.
How it works
In this recipe, we learned how the Git branching model gives you the flexibility to work on features in par-
allel by creating a branch for each feature. The pull request workflow allows you to review code changes
using the branch policies. Git tags are a great way to record milestones, such as the version of code
released; tags also give you a way to create branches from a specific point in history. We were able to create a branch from a
previous release tag to fix a critical bug in production. The branches view in the web portal makes it easy
to identify branches that are ahead of the master, and ensures that any ongoing pull request cannot be
merged into the master without first resolving its merge conflicts. A lean branching model,
such as this, allows you to create short-lived branches and push quality changes to production faster.
Getting started
Gitflow is just an abstract idea of a Git workflow. This means it dictates what kind of branches to set up
and how to merge them together. We will touch on the purposes of the branches below. The git-flow
toolset is an actual command line tool that has an installation process. The installation process for
git-flow is straightforward. Packages for git-flow are available on multiple operating systems. On OSX
systems, you can execute brew install git-flow. On Windows, you will need to download and install
git-flow. After installing git-flow, you can use it in your project by executing git flow init. Git-flow is a
wrapper around Git. The git flow init command is an extension of the default git init command and
doesn't change anything in your repository other than creating branches for you.
How it works
The develop branch will contain the complete history of the project, whereas master will contain an abridged
version. Other developers should now clone the central repository and create a tracking branch for
develop.
When using the git-flow extension library, executing git flow init on an existing repo will create the
develop branch:
Initialized empty Git repository in ~/project/.git/
No branches exist yet. Base branches must be created now.
Branch name for production releases: [master]
Branch name for "next release" development: [develop]
$ git branch
* develop
master
Feature branches
Each new feature should reside in its own branch, which can be pushed to the central repository for
backup/collaboration. But, instead of branching off master, feature branches use develop as their parent
branch. When a feature is complete, it gets merged back into develop. Features should never interact
directly with master.
Note that feature branches combined with the develop branch is, for all intents and purposes, the Feature
Branch Workflow. But the Gitflow Workflow doesn’t stop there.
Feature branches are generally created off the latest develop branch.
Creating a feature branch
Without the git-flow extensions:
git checkout develop
git checkout -b feature_branch
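With the git-flow extensions (a sketch, assuming the git-flow tooling described earlier is installed):
$ git flow feature start feature_branch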
Continue your work and use Git like you normally would.
Finishing a feature branch
When you’re done with the development work on the feature, the next step is to merge the feature_
branch into develop.
Without the git-flow extensions:
git checkout develop
git merge feature_branch
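With the git-flow extensions (again a sketch using the git flow command set):
$ git flow feature finish feature_branch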
Release branches
Once develop has acquired enough features for a release (or a predetermined release date is approach-
ing), you fork a release branch off develop. Creating this branch starts the next release cycle, so no new
features can be added after this point—only bug fixes, documentation generation, and other release-ori-
ented tasks should go in this branch. Once it's ready to ship, the release branch gets merged into master
and tagged with a version number. In addition, it should be merged back into develop, which may have
progressed since the release was initiated.
Using a dedicated branch to prepare releases makes it possible for one team to polish the current release
while another team continues working on features for the next release. It also creates well-defined phases
of development (e.g., it's easy to say, “This week we're preparing for version 4.0,” and to see it in the
structure of the repository).
Making release branches is another straightforward branching operation. Like feature branches, release
branches are based on the develop branch. A new release branch can be created using the following
methods.
Without the git-flow extensions:
git checkout develop
git checkout -b release/0.1.0
With the git-flow extensions:
$ git flow release start 0.1.0
Switched to a new branch 'release/0.1.0'
Once the release is ready to ship, it will get merged into master and develop, and then the release branch
will be deleted. It’s important to merge back into develop because critical updates may have been added
to the release branch and they need to be accessible to new features. If your organization stresses code
review, this would be an ideal place for a pull request.
To finish a release branch, use the following methods:
Without the git-flow extensions:
git checkout develop
git merge release/0.1.0
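With the git-flow extensions (a sketch; the git flow release finish command merges the release into master and develop and removes the release branch):
$ git flow release finish '0.1.0'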
Hotfix branches
Maintenance or “hotfix” branches are used to quickly patch production releases. Hotfix branches are a lot
like release branches and feature branches except they're based on master instead of develop. This is the
only branch that should fork directly off master. As soon as the fix is complete, it should be merged into
both master and develop (or the current release branch), and master should be tagged with an updated
version number.
Having a dedicated line of development for bug fixes lets your team address issues without interrupting
the rest of the workflow or waiting for the next release cycle. You can think of maintenance branches as
ad hoc release branches that work directly with master. A hotfix branch can be created using the follow-
ing methods:
Without the git-flow extensions:
git checkout master
git checkout -b hotfix_branch
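With the git-flow extensions (a sketch using the git flow hotfix command set):
$ git flow hotfix start hotfix_branch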
Like finishing a release branch, a hotfix branch gets merged into both master and develop.
Without the git-flow extensions:
git checkout master
git merge hotfix_branch
git checkout develop
git merge hotfix_branch
git branch -D hotfix_branch
With the git-flow extensions:
$ git flow hotfix finish hotfix_branch
Forking workflow
The forking workflow is fundamentally different than other popular Git workflows. Instead of using a
single server-side repository to act as the “central” codebase, it gives every developer their own serv-
er-side repository. This means that each contributor has not one, but two Git repositories: a private local
one and a public server-side one. The forking workflow is most often seen in public open-source projects.
The main advantage of the forking workflow is that contributions can be integrated without the need for
everybody to push to a single central repository. Developers push to their own server-side repositories,
and only the project maintainer can push to the official repository. This allows the maintainer to accept
commits from any developer without giving them write access to the official codebase.
The forking workflow typically follows a branching model based on the Gitflow workflow. This means that
complete feature branches will be purposed for merge into the original project maintainer's repository.
The result is a distributed workflow that provides a flexible way for large, organic teams (including
untrusted third parties) to collaborate securely. This also makes it an ideal workflow for open-source
projects.
How it works
As in the other Git workflows, the forking workflow begins with an official public repository stored on a
server. But when a new developer wants to start working on the project, they do not directly clone the
official repository.
Instead, they fork the official repository to create a copy of it on the server. This new copy serves as their
personal public repository—no other developers can push to it, but they can pull changes from it (we’ll
see why this is important in a moment). After they have created their server-side copy, the developer
performs a git clone to get a copy of it onto their local machine. This serves as their private development
environment, just like in the other workflows.
When they're ready to publish a local commit, they push the commit to their own public repository—not
the official one. Then, they file a pull request with the main repository, which lets the project maintainer
know that an update is ready to be integrated. The pull request also serves as a convenient discussion
thread if there are issues with the contributed code. The following is a step-by-step example of this
workflow.
●● A developer ‘forks’ an 'official' server-side repository. This creates their own server-side copy.
Forking vs cloning
It's important to note that “forked” repositories and "forking" are not special operations. Forked reposito-
ries are created using the standard git clone command. Forked repositories are generally “server-side
clones” and usually managed and hosted by a Git service provider such as Azure Repos. There is no
unique Git command to create forked repositories. A clone operation is essentially a copy of a repository
and its history.
Getting ready
The out-of-the-box branch policies include several policies, such as build validation and enforcing a
merge strategy. In this recipe, we'll only focus on the branch policies needed to set up a code-review
workflow.
How to do it
1. Open the branches view for the myWebApp Git repository in the Parts Unlimited team portal. Select
the master branch, and from the pull-down context menu choose Branch policies:
3. This presents the out-of-the-box policies. Check the option to require a minimum number of reviewers.
Set the minimum number of reviewers to 1 and check the option to reset the code reviewers' votes
when there are new changes:
The Allow users to approve their own changes option allows the submitter to self-approve their changes.
This is OK for mature teams, where branch policies are used as a reminder for the checks that need to be
performed by the individual.
4. Use the review policy in conjunction with the comment-resolution policy. This allows you to enforce
that code review comments are resolved before the changes are accepted. The requester can take
the feedback from a comment, create a new work item, and resolve the change; this at least
guarantees that code review comments aren't simply lost when the code is accepted into the
master branch:
5. A code change in the team project is instigated by a requirement. If the work item that triggered the
work isn't linked to the change, it becomes hard to understand why the changes were made over
time. This is especially useful when reviewing the history of changes. Configure the Check for linked
work items policy to block changes that don't have a work item linked to them:
6. Select the option to automatically add code reviewers when a pull request is raised. You can map
which reviewers are added based on the area of the code being changed:
How it works
With the branch policies in place, the master branch is now fully protected. The only way to push changes
to the master branch is by first making the changes in another branch and then raising a pull request to
trigger the change-acceptance workflow. From one of the existing user stories in the work item hub,
choose to create a new branch. By creating a new branch from a work item, that work item automatically
gets linked to the branch. You can optionally include more than one work item with a branch as part of
the create workflow:
Including a prefix followed by a forward slash in the branch name puts the branch into a folder. In the preceding
example, the branch will go into a folder named after the prefix. This is a great way to organise branches in busy environments.
With the newly created branch selected in the web portal, edit the HomeController.cs file to include the
following code snippet and commit the changes to the branch. In the image below you'll see that after
editing the file, you can directly commit the changes by clicking the commit button.
The file path control in the team portal supports search. Start typing a file path, and all files in your Git
repository under that directory whose names start with those letters will show up in the file path search results
dropdown.
The code editor in the web portal has several new features in Azure DevOps Server 2018, such as support for
bracket matching and toggling white space. You can also load the command palette by pressing its keyboard
shortcut. Among many other new options, you can now toggle the file mini-map, collapse and expand sections,
and perform other standard editor operations.
To push these changes from the new branch into the master branch, create a pull request from the pull
request view. Select the new branch as the source and the master as the target branch. The new pull
request form supports markdown, so you can add the description using the markdown syntax. The
description window also supports @ mentions and # to link work items:
The pull request is created; the overview page summarizes the changes and the status of the policies. The
Files tab shows you a list of changes along with the difference between the previous and the current
versions. Any updates pushed to the code files will show up in the updates tab, and a list of all the
commits is shown under the Commits tab:
Open the Files tab: this view supports code comments at the line level, file level, and overall. The com-
ments support both @ for mentions and # to link work items, and the text supports markdown syntax:
The code comments are persisted in the pull request workflow; they support multiple
iterations of reviews and work well with nested responses. The reviewer policy allows for a code review
workflow as part of the change acceptance. This is a great way for the team to collaborate on any code
changes being pushed into the master branch. When the required number of reviewers approve the pull
request, it can be completed. You can also mark the pull request to auto-complete after your review; it will
then complete automatically once all the policies have been successfully complied with.
There's more
Have you ever been in a state where a branch has been accidentally deleted? It can be difficult to figure
out what happened. Azure DevOps Server now supports searching for deleted branches. This helps you
understand who deleted a branch and when, and the interface also allows you to recreate the branch if you wish.
To cut out the noise from the search results, deleted branches are only shown if you search for them by
their exact name. To search for a deleted branch, enter the full branch name into the branch search box. It
will return any existing branches that match that text. You will also see an option to search for an exact
match in the list of deleted branches. If a match is found, you will see who deleted it and when. You can
also restore the branch. Restoring the branch will re-create it at the commit to which it last pointed.
However, it will not restore policies and permissions.
Using a mobile app in combination with Git is a very convenient option, particularly when urgent pull
request approvals are required.
●● The app can render markdown, images, and PDF files directly on the mobile device.
●● Pull requests can be managed within the app, along with marking files as viewed and collapsing files.
●● Comments can be added.
●● Emoji short codes are rendered.
Git hooks
Git hooks are a mechanism that allows arbitrary code to be run before, or after, certain Git lifecycle events
occur. For example, one could have a hook into the commit-msg event to validate that the commit
message structure follows the recommended format. The hooks can be any sort of executable code,
including shell, PowerShell, Python, or any other scripts. Or they may be a binary executable. Anything
goes! The only criterion is that hooks must be stored in the .git/hooks folder in the repo root, and that they
must be named to match the corresponding events (as of Git 2.x):
●● applypatch-msg
●● pre-applypatch
●● post-applypatch
●● pre-commit
●● prepare-commit-msg
●● commit-msg
●● post-commit
●● pre-rebase
●● post-checkout
●● post-merge
●● pre-receive
●● update
●● post-receive
●● post-update
●● pre-auto-gc
●● post-rewrite
●● pre-push
So where do I start?
Let’s start by exploring client-side Git hooks. Navigate to the repo.git\hooks directory. You will find that there
are a bunch of samples, but they are disabled by default. For instance, if you open that folder, you will find a
file called pre-commit.sample. To enable it, just rename it to pre-commit by removing the .sample
extension and make the script executable. When you attempt to commit using git commit, the script is
found and executed. If your pre-commit script exits with a 0 (zero), you commit successfully; otherwise,
the commit fails.
On Unix-like OSes, the #! tells the program loader that this is a script to be interpreted, and /bin/sh is the
path to the interpreter you want to use, sh in this case. Windows is not a Unix-like OS. Git for Windows
supports Bash commands and shell scripts via Cygwin. By default, what does it find when it looks for
sh.exe at /bin/sh? Yup, nothing; nothing at all. Fix it by providing the path to the sh executable on your
system. I’m using the 64-bit version of Git for Windows, so my shebang line looks like this:
#!C:/Program\ Files/Git/usr/bin/sh.exe
The rest of the pre-commit script then scans the staged changes for words from a blocked list:
matches=$(git diff-index --patch HEAD | grep '^+' | grep -Pi 'password|keyword2|keyword3')
if [ ! -z "$matches" ]
then
cat <<\EOT
Error: Words from the blocked list were present in the diff:
EOT
echo $matches
exit 1
fi
Of course, you don’t have to build the full keyword scan list in this script; you can branch off to a different
file by referring to it here, and that file could simply be encrypted or scrambled if you wanted to.
The repo .git\hooks folder is not committed into source control, so you may ask how you can share the
goodness of the automated scripts you create with the team. The good news is that, from Git version 2.9,
you can map Git hooks to a folder that can be committed into source control. You could do that by
simply updating the global settings configuration for your Git repository:
git config --global core.hooksPath '~/.GitHooks'
If you ever need to bypass the Git hooks you have set up on the client side, you can do so by using
the --no-verify switch:
git commit --no-verify
4 https://docs.microsoft.com/en-gb/azure/devops/service-hooks/events?view=vsts#code-pushed
Getting ready
Let’s start by exploring client-side Git hooks. Navigate to the repo.git\hooks directory – you’ll find that
there are a bunch of samples, but they are disabled by default. For instance, if you open that folder, you'll
find a file called pre-commit.sample. To enable it, just rename it to pre-commit by removing the .sample
extension and make the script executable. When you attempt to commit using git commit, the script is
found and executed. If your pre-commit script exits with a 0 (zero), you commit successfully; otherwise,
the commit fails.
If you are using Windows, simply renaming the file won't work. Git will fail to find the shell in the desig-
nated path as specified in the script. The problem is lurking in the first line of the script, the shebang
declaration:
#!/bin/sh
On Unix-like OSes, the #! tells the program loader that this is a script to be interpreted, and /bin/sh is the
path to the interpreter you want to use, sh in this case. Windows is not a Unix-like OS. Git for Windows
supports Bash commands and shell scripts via Cygwin. By default, what does it find when it looks for
sh.exe at /bin/sh? Yup, nothing; nothing at all. Fix it by providing the path to the sh executable on your
system. I'm using the 64-bit version of Git for Windows, so my shebang line looks like this:
#!C:/Program\ Files/Git/usr/bin/sh.exe
How to do it
How could Git hooks stop you from accidentally leaking Amazon AWS access keys to GitHub? You can
invoke a script at pre-commit using Git hooks to scan the increment of code being committed into your
local repository for specific keywords:
1. Replace the code in this pre-commit shell file with the following code.
#!C:/Program\ Files/Git/usr/bin/sh.exe
matches=$(git diff-index --patch HEAD | grep '^+' | grep -Pi 'password|keyword2|keyword3')
if [ ! -z "$matches" ]
then
cat <<\EOT
Error: Words from the blocked list were present in the diff:
EOT
echo $matches
exit 1
fi
You don't have to build the full keyword scan list in this script; you can branch off to a different file by
referring to it here, and that file could simply be encrypted or scrambled if you wanted to.
How it works
In the script, Git diff-index is used to identify the code increment being committed. This increment is then
compared against the list of specified keywords. If any matches are found, an error is raised to block the
commit; the script returns an error message with the list of matches. In this case, the pre-commit script
doesn't return 0 (zero), which means the commit fails.
There's more
The repo.git\hooks folder is not committed into source control, so you may wonder how you share the
goodness of the automated scripts you create with the team. The good news is that, from Git version 2.9,
you can map Git hooks to a folder that can be committed into source control. You could do that by
simply updating the global settings configuration for your Git repository:
git config --global core.hooksPath '~/.githooks'
If you ever need to bypass the Git hooks you have set up on the client side, you can do so by using the
--no-verify switch:
git commit --no-verify
Inner source
Inner source – sometimes called “internal open source” – brings all the benefits of open-source software
development inside your firewall. It opens your software development processes so that your developers
can easily collaborate on projects across your company, using the same processes that are popular
throughout the open-source software communities. But it keeps your code safe and secure within your
organization.
Microsoft uses the inner source approach heavily. As part of the efforts to standardize on one engineer-
ing system throughout the company – backed by Azure Repos – Microsoft has also opened the source
code to all our projects to everyone within the company.
Before the move to inner source, Microsoft was “siloed”: only engineers working on Windows could read
the Windows source code. Only developers working on Office could look at the Office source code. So, if
you were an engineer working on Visual Studio and you thought that you had found a bug in Windows
or Office – or wanted to add a new feature – you were simply out of luck. But by moving to offer inner
source throughout the company, powered by Azure Repos, it’s easy to fork a repository to contribute
back. As an individual making the change you don’t need write access to the original repository, just the
ability to read it and create a fork.
What's in a fork?
A fork starts with all the contents of its upstream (original) repository. When you create a fork, you can
choose whether to include all branches or limit to only the default branch. None of the permissions,
policies, or build pipelines are applied. The new fork acts as if someone cloned the original repository,
then pushed to a new, empty repository. After a fork has been created, new files, folders, and branches
are not shared between the repositories unless a Pull Request (PR) carries them along.
Note - You must have the Create Repository permission in your chosen project to create a fork. We
recommend you create a dedicated project for forks where all contributors have the Create Repository
permission. For an example of granting this permission, see Set Git repository permissions.
Important - Anyone with the Read permission can open a PR to upstream. If a PR build pipeline is
configured, the build will run against the code introduced in the fork.
The forking workflow lets you isolate changes from the main repository until you're ready to integrate
them. When you're ready, integrating code is as easy as completing a pull request.
Getting ready
A fork starts with all the contents of its upstream (original) repository. When you create a fork in the
Azure DevOps Server, you can choose whether to include all branches or limit to only the default branch.
A fork doesn't copy the permissions, policies, or build definitions of the repository being forked. After a
fork has been created, the newly created files, folders, and branches are not shared between the reposito-
ries unless you start a pull request. Pull requests are supported in either direction: from fork to upstream,
or upstream to fork. The most common direction for a pull request will be from fork to upstream.
How to do it
1. Choose the Fork button (1), and then select the project where you want the fork to be created (2).
Give your fork a name and choose the Fork button (3).
2. Once your fork is ready, clone it using the command line or an IDE, such as Visual Studio. The fork will
be your origin remote. For convenience, you'll want to add the upstream repository (where you forked
from) as a remote named upstream. On the command line, type the following:
git remote add upstream {upstream_url}
3. It's possible to work directly in the master – after all, this fork is your personal copy of the repo. We
recommend you still work in a topic branch, though. This allows you to maintain multiple independent
workstreams simultaneously. Also, it reduces confusion later when you want to sync changes into your
fork. Make and commit your changes as you normally would. When you're done with the changes,
push them to origin (your fork).
4. Open a pull request from your fork to the upstream. All the policies, required reviewers, and builds will
be applied in the upstream repo. Once all the policies are satisfied, the PR can be completed, and the
changes become a permanent part of the upstream repo:
5. When your PR is accepted into upstream, you'll want to make sure your fork reflects the latest state of
the repo. We recommend rebasing on the upstream's master branch (assuming the master is the main
development branch). On the command line, run the following:
git fetch upstream master
git rebase upstream/master
git push origin
How it works
The forking workflow lets you isolate changes from the main repository until you're ready to integrate
them. When you're ready, integrating code is as easy as completing a pull request.
For more information, see:
●● Clone an Existing Git repo5
●● Azure Repos Git Tutorial6
5 https://docs.microsoft.com/en-us/azure/devops/repos/git/clone?view=azure-devops&tabs=visual-studio
6 https://docs.microsoft.com/en-us/azure/devops/repos/git/gitworkflow?view=azure-devops
Shallow clone
If developers do not need all the available history in their local repositories, a good option is to imple-
ment a shallow clone. This saves both space on local development systems, and the time it takes to sync.
You can specify the depth of the clone that you want to execute:
git clone --depth [depth] [clone-url]
You can also achieve a reduced size clone by filtering branches, or by cloning only a single branch.
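As an illustration (a sketch; the branch name main and the depth value are examples only), a shallow, single-branch clone can be combined as follows:
git clone --depth 1 --single-branch --branch main [clone-url]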
7 https://docs.github.com/en/free-pro-team@latest/github/managing-large-files/working-with-large-files
If you commit sensitive data (e.g., a password or key) to Git, it can be removed from the repository history. There are two
tools that are commonly used to do this:
Filter-branch
The standard built-in Git method for removing files is to use the git filter-branch command. This com-
mand rewrites your repository history.
Note: the SHA hashes for your commits will then also change.
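As a hedged example (path/to/secret-file.txt is an illustrative path, not one from this course), the commonly documented filter-branch invocation for purging a single file from all commits and tags looks like this:
git filter-branch --force --index-filter \
  "git rm --cached --ignore-unmatch path/to/secret-file.txt" \
  --prune-empty --tag-name-filter cat -- --all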
BFG Repo-Cleaner
BFG Repo-Cleaner is a commonly used open-source tool for deleting or “fixing” content in repositories. It
is easier to use than the git filter-branch command. For a single file or set of files, use the --delete-files
option:
$ bfg --delete-files file_I_should_not_have_committed
To replace all the text listed in a passwords.txt file wherever it occurs in your repository's history, use
the --replace-text option:
$ bfg --replace-text passwords.txt
8 https://docs.github.com/en/free-pro-team@latest/github/managing-large-files/removing-files-from-a-repositorys-history
9 https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/removing-sensitive-data-from-a-repository
10 https://rtyley.github.io/bfg-repo-cleaner/
Lab
Lab 04: Version controlling with Git in Azure Re-
pos
Lab overview
Azure DevOps supports two types of version control, Git and Team Foundation Version Control (TFVC).
Here is a quick overview of the two version control systems:
●● Team Foundation Version Control (TFVC): TFVC is a centralized version control system. Typically,
team members have only one version of each file on their dev machines. Historical data is maintained
only on the server. Branches are path-based and created on the server.
●● Git: Git is a distributed version control system. Git repositories can live locally (such as on a develop-
er's machine). Each developer has a copy of the source repository on their dev machine. Developers
can commit each set of changes on their dev machine and perform version control operations such as
history and compare without a network connection.
Git is the default version control provider for new projects. You should use Git for version control in your
projects unless you have a specific need for centralized version control features in TFVC.
In this lab, you will learn how to work with branches and repositories in Azure DevOps.
Objectives
After you complete this lab, you will be able to:
●● Work with branches in Azure Repos
●● Work with repositories in Azure Repos
Lab duration
●● Estimated time: 30 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions11
11 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
What are Git hooks?
Review Question 3
What are some best practices when working with files in Git? What do you suggest for working with large
files?
Answers
Review Question 1
What are the three types of branching? Select three.
■■ Trunk-based development
Toggle workflow
■■ Gitflow branching
■■ Forking workflow
Straight branching
What are Git hooks?
A mechanism that allows arbitrary code to be run before, or after, certain Git lifecycle events occur. Use Git
hooks to enforce policies, ensure consistency, and control your environment. Can be either client-side or
server-side.
What are some best practices when working with files in Git? What do you suggest for working with large
files?
Best practices: use a package management system for DLLs, library files, and other dependent files; don't
commit the binaries, logs, tracing output, or diagnostic data from your builds; don't commit large, frequently
updated binary assets; and use diffable plain text formats, such as JSON, for configuration information. For
large files, use Git LFS.
Module 5 Configuring Azure Pipelines
Module overview
Azure Pipelines
Azure Pipelines is a fully featured service that is mostly used to create cross-platform CI (Continuous
Integration) and CD (Continuous Deployment) pipelines. It works with your preferred Git provider and can deploy
to most major cloud services, including Azure services. Azure DevOps offers a comprehensive
Pipelines offering.
Learning objectives
After completing this module, students will be able to:
●● Explain the role of Azure Pipelines and its components
●● Configure Agents for use in Azure Pipelines
Test automation
Throughout this stage, the new version of an application is rigorously tested to ensure that it meets all
desired system qualities. It is important that all relevant aspects — whether functionality, security,
performance, or compliance — are verified by the pipeline. The stage may involve different types of
automated or (initially, at least) manual activities.
Deployment automation
A deployment is required every time the application is installed in an environment for testing, but the
most critical moment for deployment automation is rollout time. Since the preceding stages have verified
the overall quality of the system, this is a low-risk step. The deployment can be staged, with the new
version being initially released to a subset of the production environment and monitored before being
completely rolled out. The deployment is automated, allowing for the reliable delivery of new functionali-
ty to users within minutes, if needed.
Azure Pipelines
Azure Pipelines
Azure Pipelines is a cloud service that you can use to automatically build and test your code project and
make it available to other users. It works with just about any language or project type. Azure Pipelines
combines continuous integration (CI) and continuous delivery (CD) to test and build your code and ship it
to any target constantly and consistently.
Languages
You can use many languages with Azure Pipelines, such as Python, Java, PHP, Ruby, C#, and Go.
Application types
You can use Azure Pipelines with most application types, such as Java, JavaScript, Python, .NET, PHP, Go,
XCode, and C++.
Deployment targets
Use Azure Pipelines to deploy your code to multiple targets. Targets include container registries, virtual
machines, Azure services, or any on-premises or cloud target such as Microsoft Azure, Google Cloud, or
Amazon Web Services (AWS).
Package formats
To produce packages that can be consumed by others, you can publish NuGet, npm, or Maven packages
to the built-in package management repository in Azure Pipelines. You also can use any other package
management repository of your choice.
Agent
When your build or deployment runs, the system begins one or more jobs. An agent is installable
software that runs a build and/or deployment job.
Artifact
An artifact is a collection of files or packages published by a build. Artifacts are made available to subse-
quent tasks, such as distribution or deployment.
Build
A build represents one execution of a pipeline. It collects the logs associated with running the steps and
the results of running tests.
Continuous delivery
Continuous delivery (CD) (also known as Continuous Deployment) is a process by which code is built,
tested, and deployed to one or more test and production stages. Deploying and testing in multiple
stages helps drive quality. Continuous integration systems produce deployable artifacts, which include
infrastructure and apps. Automated release pipelines consume these artifacts to release new versions and
fixes to existing systems. Monitoring and alerting systems run constantly to drive visibility into the entire
CD process. This process ensures that errors are caught often and early.
Continuous integration
Continuous integration (CI) is the practice used by development teams to simplify the testing and
building of code. CI helps to catch bugs or problems early in the development cycle, which makes them
easier and faster to fix. Automated tests and builds are run as part of the CI process. The process can run
on a set schedule, whenever code is pushed, or both. Items known as artifacts are produced from CI
systems. They're used by the continuous delivery release pipelines to drive automatic deployments.
Deployment target
A deployment target is a virtual machine, container, web app, or any service that's used to host the
application being developed. A pipeline might deploy the app to one or more deployment targets after
build is completed and tests are run.
Job
A build contains one or more jobs. Most jobs run on an agent. A job represents an execution boundary of
a set of steps. All the steps run together on the same agent. For example, you might build two configura-
tions - x86 and x64. In this case, you have one build and two jobs.
Pipeline
A pipeline defines the continuous integration and deployment process for your app. It's made up of steps
called tasks. It can be thought of as a script that defines how your test, build, and deployment steps are
run.
Release
When you use the visual designer, you create a release pipeline in addition to a build pipeline. A release
is the term used to describe one execution of a release pipeline. It's made up of deployments to multiple
stages.
Stage
Stages are the major divisions in a pipeline: “build the app”, "run integration tests", and “deploy to user
acceptance testing” are good examples of stages.
Task
A task is the building block of a pipeline. For example, a build pipeline might consist of build tasks and
test tasks. A release pipeline consists of deployment tasks. Each task runs a specific job in the pipeline.
Trigger
A trigger is something that's set up to tell the pipeline when to run. You can configure a pipeline to run
upon a push to a repository, at scheduled times, or upon the completion of another build. All these
actions are known as triggers.
Microsoft-hosted agent
If your pipelines are in Azure Pipelines, then you've got a convenient option to build and deploy using a
Microsoft-hosted agent. With a Microsoft-hosted agent, maintenance and upgrades are automatically
done. Each time a pipeline is run, a fresh virtual machine (instance) is provided. The virtual machine is
discarded after one use.
For many teams this is the simplest way to build and deploy. You can try it first and see if it works for your
build or deployment. If not, you can use a self-hosted agent.
A Microsoft-hosted agent has job time limits.
Self-hosted agent
An agent that you set up and manage on your own to run build and deployment jobs is a self-hosted
agent. You can use a self-hosted agent in Azure Pipelines. A self-hosted agent gives you more control to
install dependent software needed for your builds and deployments.
You can install the agent on Linux, macOS, Windows machines, or a Linux Docker container. After you've
installed the agent on a machine, you can install any other software on that machine as required by your
build or deployment jobs.
A self-hosted agent does not have job time limits.
Job types
In Azure DevOps, there are four types of jobs available:
●● Agent pool jobs
●● Container jobs
●● Deployment group jobs
●● Agentless jobs
Container jobs
These jobs are similar to Agent Pool Jobs, but they run in a container on an agent that is part of an agent
pool.
Agentless jobs
These are jobs that run directly on the Azure DevOps server. They do not require an agent for execution.
These are also often called Server Jobs.
Agent pools
Agent pools
Instead of managing each agent individually, you organize agents into agent pools. An agent pool
defines the sharing boundary for all agents in that pool. In Azure Pipelines, agent pools are scoped to the
Azure DevOps organization; so, you can share an agent pool across projects.
A project agent pool provides access to an organization agent pool. When you create a build or release
pipeline, you specify which pool it uses. Pools are scoped to your project so you can only use them across
build and release pipelines within a project.
To share an agent pool with multiple projects, in each of those projects, you create a project agent pool
pointing to an organization agent pool. While multiple pools across projects can use the same organiza-
tion agent pool, multiple pools within a project cannot use the same organization agent pool. Also, each
project agent pool can use only one organization agent pool.
1 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=vsts&tabs=yaml
2 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=vsts&tabs=yaml
3 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml
Azure Pipelines
In Azure Pipelines, roles are defined on each agent pool, and membership in these roles governs what
operations you can perform on an agent pool.
The All agent pools node in the Agent Pools tab is used to control the security of all organization agent
pools. Role memberships for individual organization agent pools are automatically inherited from those
of the ‘All agent pools’ node.
Roles are also defined on each organization agent pool, and memberships in these roles govern what
operations you can perform on an agent pool.
A release consumes a parallel job only when it's being actively deployed to a stage. While the release is
waiting for an approval or a manual intervention, it does not consume a parallel job.
Simple estimate
A simple rule of thumb: Estimate that you'll need one parallel job for every four to five users in your
organization.
Detailed estimate
In the following scenarios, you might need multiple parallel jobs:
●● If you have multiple teams, and if each of them requires a CI build, you'll likely need a parallel job for
each team.
●● If your CI build trigger applies to multiple branches, you'll likely need a parallel job for each active
branch.
●● If you develop multiple applications by using one organization or server, you'll likely need additional
parallel jobs: one to deploy each application at the same time.
Supported services
Non-members of a public project will have read-only access to a limited set of services, specifically:
●● Browse the code base, download code, view commits, branches, and pull requests
●● View and filter work items
●● View a project page or dashboard
●● View the project Wiki
●● Perform semantic search of the code or work items
For additional information, see Differences and limitations for non-members of a public project4.
4 https://docs.microsoft.com/en-us/azure/devops/organizations/public/feature-differences?view=azure-devops
Thanks to public project capabilities, the team will be able to enable just that experience, and everyone
in the community will have access to the same build results, regardless of whether they are a maintainer on the
project or not.
When you're using the per-minute plan, you can run only
one job at a time.
If you run builds for more than 14 paid hours in a month, the per-minute plan might be less cost-effec-
tive than the parallel jobs model.
5 https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started-designer?view=vsts&tabs=new-nav
part identical. It is tedious to craft these pipelines via a user interface or SDK. Having the ability to define
the pipeline along with the code helps apply all principles of code sharing, reuse, templatization and code
reviews.
Azure DevOps offers you both experiences: you can either use YAML to define your pipelines or use the visual designer to do the same. You will, however, find that more product-level investments are being made to enhance the YAML pipeline experience.
When you use YAML, you define your pipeline mostly in code (a YAML file) alongside the rest of the code
for your app. When you use the visual designer, you define a build pipeline to build and test your code,
and then to publish artifacts. You also define a release pipeline to consume and deploy those artifacts to
deployment targets.
6 https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started-yaml?view=vsts
Lab
Lab 05: Configuring agent pools and under-
standing pipeline styles
Lab overview
YAML-based pipelines allow you to fully implement CI/CD as code, in which pipeline definitions reside in
the same repository as the code that is part of your Azure DevOps project. YAML-based pipelines support
a wide range of features that are part of the classic pipelines, such as pull requests, code reviews, history,
branching, and templates.
Regardless of the choice of pipeline style, to build your code or deploy your solution by using Azure Pipelines, you need an agent. An agent provides the compute resources that run one job at a time. Jobs can be run directly on the host machine of the agent or in a container. You have the option to run your jobs using Microsoft-hosted agents, which are managed for you, or to implement a self-hosted agent that you set up and manage on your own.
In this lab, you will step through the process of converting a classic pipeline into a YAML-based one and
running it first by using a Microsoft-hosted agent and then performing the equivalent task by using a
self-hosted agent.
Objectives
After you complete this lab, you will be able to:
●● implement YAML-based pipelines
●● implement self-hosted agents
Lab duration
●● Estimated time: 90 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions7
7 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
What is a pipeline, and why is it used?
Review Question 3
What are the two types of agents and how are they different?
Review Question 4
What is an agent pool, and why would you use it?
Review Question 5
Name two ways to configure your Azure Pipelines.
Answers
Review Question 1
What are some advantages of Azure Pipelines? Mark all that apply.
■■ Work with any language or platform - Python, Java, PHP, Ruby, C#, and Go
■■ work with open-source projects
■■ deploy to different types of targets at the same time
■■ integrate with Azure deployments - container registries, virtual machines, Azure services, or any
on-premises or cloud target (Microsoft Azure, Google Cloud, or Amazon cloud services)
■■ build on Windows, Linux, or Mac machines
■■ integrate with GitHub
What is a pipeline, and why is it used?
A pipeline enables a constant flow of changes into production via an automated software production line.
Pipelines create a repeatable, reliable, and incrementally improving process for taking software from
concept to customer.
What are the two types of agents and how are they different?
Microsoft-hosted agents – Microsoft automatically takes care of maintenance and upgrades. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use. Self-hosted agents – You take care of maintenance and upgrades. They give you more control to install the dependent software you need. You can install the agent on Linux, macOS, or Windows machines, or even in a Linux Docker container.
What is an agent pool, and why would you use it?
You can organize agents into agent pools. An agent pool defines the sharing boundary. In Azure Pipelines, agent pools are scoped to the Azure DevOps organization, so you can share an agent pool across projects.
Module overview
Module overview
Continuous Integration is one of the key pillars of DevOps. Once you have your code in a version control
system you need an automated way of integrating the code on an ongoing basis. Azure Pipelines can be
used to create a fully featured cross platform CI and CD service. It works with your preferred Git provider
and can deploy to most major cloud services, which include Azure services.
Learning objectives
After completing this module, students will be able to:
●● Explain why continuous integration matters
●● Implement continuous integration using Azure Pipelines
The idea is to minimize the cost of integration by making it an early consideration. Developers can
discover conflicts at the boundaries between new and existing code early, while conflicts are still relatively
easy to reconcile. Once the conflict is resolved, work can continue with confidence that the new code
honors the requirements of the existing codebase.
Integrating code frequently does not, by itself, offer any guarantees about the quality of the new code or
functionality. In many organizations, integration is costly because manual processes are used to ensure
that the code meets standards, does not introduce bugs, and does not break existing functionality.
Frequent integration can create friction when the level of automation does not match the amount of quality assurance measures in place.
To address this friction within the integration process, in practice, continuous integration relies on robust
test suites and an automated system to run those tests. When a developer merges code into the main
repository, automated processes kick off a build of the new code. Afterwards, test suites are run against
the new build to check whether any integration problems were introduced. If either the build or the test
phase fails, the team is alerted so that they can work to fix the build.
The end goal of continuous integration is to make integration a simple, repeatable process that is part of
the everyday development workflow to reduce integration costs and respond to defects early. Working to
make sure the system is robust, automated, and fast while cultivating a team culture that encourages
frequent iteration and responsiveness to build issues is fundamental to the success of the strategy.
1 https://git-scm.com/
2 https://subversion.apache.org/
3 https://docs.microsoft.com/en-us/azure/devops/repos/tfvc/overview?view=vsts
4 https://www.nuget.org/
5 https://www.npmjs.com/
6 https://chocolatey.org/
7 https://brew.sh/
8 http://rpm.org/
9 https://azure.microsoft.com/en-us/services/devops
10 https://www.jetbrains.com/teamcity/
11 https://jenkins.io/
12 http://ant.apache.org/
13 http://nant.sourceforge.net/
14 https://gradle.org/
15 https://docs.microsoft.com/en-us/azure/devops/learn/what-is-continuous-integration
In this case, the date has been retrieved as a system variable, then formatted via yyyyMMdd, and the
revision is then appended.
Build status
While we have been manually queuing each build, we will see in the next lesson that builds can be
automatically triggered. This is a key capability required for continuous integration. But there are times
that we might not want the build to run, even if it is triggered. This can be controlled with these settings:
Note that you can use the Paused setting to allow new builds to queue while holding off on starting them.
For more information, see Build Pipeline Options16.
16 https://docs.microsoft.com/en-us/azure/devops/pipelines/build/options?view=vsts&tabs=yaml
The authorization scope determines whether the build job is limited to accessing resources in the current
project, or if it can access resources in other projects in the project collection.
The build job timeout determines how long the job can execute before being automatically canceled. A
value of zero (or leaving the text box empty) specifies that there is no limit.
The build job cancel timeout determines how long the server will wait for a build job to respond to a
cancellation request.
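The same limits can also be expressed per job in YAML; here is a rough sketch (the job name and script are assumptions, not from the course):
jobs:
- job: LongRunningBuild
  timeoutInMinutes: 60       # cancel the job if it runs longer than an hour
  cancelTimeoutInMinutes: 5  # how long tasks get to respond to a cancellation request
  steps:
  - script: ./build.sh       # hypothetical build script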
Badges
Some development teams like to show the state of the build on an external monitor or website. These settings provide a link to the badge image to use for that. Here is an example Azure Pipelines badge showing a Succeeded status:
17 https://docs.microsoft.com/en-us/azure/devops/pipelines/build/options?view=vsts&tabs=yaml
When you configure a build pipeline, in addition to choosing the agent pool to use, you can specify on the Options tab certain demands that the agent must meet.
In the above image, the HasPaymentService capability is required in the collection of capabilities. In addition to an exists condition, you can require that a capability equals a specific value.
For more information, see Capabilities18.
18 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=vsts#capabilities
Parallel jobs
At the organization level, you can configure the number of parallel jobs that are made available.
The free tier allows for one parallel job of up to 1800 minutes per month. The self-hosted agents have
higher levels.
✔️ Note: You can define a build as a collection of jobs, rather than as a single job. Each job consumes
one of these parallel jobs that runs on an agent. If there aren't enough parallel jobs available for your
organization, the jobs will be queued and run sequentially.
Hello world
Let’s start off slowly and create a simple pipeline that echoes “Hello world!” to the console. No technical course is complete without a hello world example.
name: 1.0$(Rev:.r)
# equivalent trigger
# trigger:
#   branches:
#     include:
#     - master
variables:
  name: martin
pool:
  vmImage: ubuntu-latest
jobs:
- job: helloworld
  steps:
  - script: echo "Hello, $(name)"
●● Steps – the actual tasks that need to be executed: in this case a “script” task (script is an alias) that can
run inline scripts
Name
The name property is a bit misleading, since the name is really the build number format. If you do not explicitly set a name format, you’ll get an integer number: a monotonically increasing count of runs triggered off this pipeline, starting at 1. This number is stored in Azure DevOps. You can make use of this number by referencing $(Rev).
To make a date-based number, you can use the format $(Date:yyyy-MM-dd-HH-mm) to get a build number like 2020-01-16-19-22. To get a semantic number like 1.0.x, you can use something like 1.0$(Rev:.r).
Triggers
If there is no explicit trigger section, then it is implied that any commit to any path in any branch will trigger this pipeline to run. You can, however, be more explicit by using filters such as branches and/or paths. Let’s consider this trigger:
trigger:
  branches:
    include:
    - master
This trigger is configured to queue the pipeline only when there is a commit to the master branch. What
about triggering for any branch except master? You guessed it: use exclude instead of include:
trigger:
  branches:
    exclude:
    - master
TIP: You can get the name of the branch from the variables Build.SourceBranch (for the full name like
refs/heads/master) or Build.SourceBranchName (for the short name like master).
What about a trigger for any branch with a name that starts with feature/ and only if the change is in the webapp folder?
trigger:
  branches:
    include:
    - feature/*
  paths:
    include:
    - webapp/**
Of course, you can mix includes and excludes if you really need to. You can also filter on tags.
TIP: Don't forget one overlooked trigger: none. If you never want your pipeline to trigger automatically,
then you can use none. This is useful if you want to create a pipeline that is only manually triggered.
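As a sketch that combines these filters (the branch and path names are assumptions), a trigger might look like this:
trigger:
  branches:
    include:
    - release/*
    exclude:
    - release/experimental/*
  paths:
    exclude:
    - docs/**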
Jobs
A job is a set of steps that are executed by an agent in a queue (or pool). Jobs are atomic – that is, they are executed wholly on a single agent. You can configure the same job to run on multiple agents at the same time, but even in this case the entire set of steps in the job is run on every agent. If you need some steps to run on one agent and some on another, you’ll need two jobs.
A job has the following attributes besides its name (a short sketch combining several of them follows the list):
1. displayName – a friendly name
2. dependsOn - a way to specify dependencies and ordering of multiple jobs
3. condition – a binary expression: if this evaluates to true, the job runs; if false, the job is skipped
4. strategy - used to control how jobs are parallelized
5. continueOnError - to specify if the remainder of the pipeline should continue or not if this job fails
6. pool – the name of the pool (queue) to run this job on
7. workspace - managing the source workspace
8. container - for specifying a container image to execute the job in - more on this later
9. variables – variables scoped to this job
10. steps – the set of steps to execute
11. timeoutInMinutes and cancelTimeoutInMinutes for controlling timeouts
12. services - sidecar services that you can spin up
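As a minimal sketch (the job name, pool image, and variable are illustrative assumptions, not from the course), several of these attributes combine like this:
jobs:
- job: build
  displayName: Build the application
  pool:
    vmImage: ubuntu-latest
  condition: succeeded()
  continueOnError: false
  timeoutInMinutes: 30
  variables:
    buildConfiguration: Release
  steps:
  - script: echo "Building $(buildConfiguration)"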
Dependencies
You can define dependencies between jobs using the dependsOn property. This lets you specify sequences and fan-out and fan-in scenarios. If you do not explicitly define a dependency, a sequential dependency is implied. If you truly want jobs to run in parallel, you need to specify dependsOn: none.
Let's look at a few examples. Consider this pipeline:
jobs:
- job: A
  steps:
  # steps omitted for brevity
- job: B
  steps:
  # steps omitted for brevity
19 https://docs.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops&tabs=yaml
Because no dependsOn was specified, the jobs will run sequentially: first A and then B.
To have both jobs run in parallel, we just add dependsOn: none to job B:
jobs:
- job: A
  steps:
  # steps omitted for brevity
- job: B
  dependsOn: none
  steps:
  # steps omitted for brevity
The dependsOn property also lets you create fan-out and fan-in graphs. In the following pipeline, jobs B and C both depend on A, job D waits for B and C, and job E waits for B and D:
jobs:
- job: A
  steps:
  - script: echo 'job A'
- job: B
  dependsOn: A
  steps:
  - script: echo 'job B'
- job: C
  dependsOn: A
  steps:
  - script: echo 'job C'
- job: D
  dependsOn:
  - B
  - C
  steps:
  - script: echo 'job D'
- job: E
  dependsOn:
  - B
  - D
  steps:
  - script: echo 'job E'
Checkout
Classic builds implicitly check out any repository artifacts, but YAML pipelines require you to be more explicit by using the checkout keyword:
●● Jobs check out the repo they are contained in automatically unless you specify checkout: none.
●● Deployment jobs do not automatically check out the repo, so you'll need to specify checkout: self for
deployment jobs if you want to get access to files in the YAML file's repo.
Download
Downloading artifacts requires you to use the download keyword. Downloads also work the opposite way for jobs and deployment jobs (a sketch follows this list):
●● Jobs do not download anything unless you explicitly define a download.
●● Deployment jobs implicitly perform a download: current, which downloads any pipeline artifacts that have been created in the current pipeline. To prevent this, you must specify download: none.
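As a sketch (the artifact name is an assumption), an ordinary job can download a pipeline artifact explicitly like this:
steps:
- download: current   # artifacts published by the current pipeline run
  artifact: WebApp    # hypothetical artifact name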
Resources
What if your job requires source code in another repository? You’ll need to use resources. Resources let
you reference:
1. other repositories
2. pipelines
3. builds (classic builds)
4. containers (for container jobs)
5. packages
To reference code in another repo, specify that repo in the resources section and then reference it via its
alias in the checkout step:
resources:
  repositories:
  - repository: appcode
    type: git
    name: otherRepo

steps:
- checkout: appcode
Variables
It would be tough to achieve any sort of sophistication in your pipelines without variables. There are
several types of variables, though this classification is partly mine and pipelines don’t distinguish between
these types. However, I’ve found it useful to categorize pipeline variables to help teams understand some
of the nuances that occur when dealing with them.
Every variable is really a key:value pair. The key is the name of the variable, and it has a value.
To dereference a variable, simply wrap the key in $(). Let’s consider this simple example:
variables:
  name: martin

steps:
- script: echo "Hello, $(name)!"
Pipeline structure
A pipeline is one or more stages that describe a CI/CD process. Stages are the major divisions in a
pipeline. The stages “Build this app,” "Run these tests," and “Deploy to preproduction” are good exam-
ples.
A stage is one or more jobs, which are units of work assignable to the same machine. You can arrange
both stages and jobs into dependency graphs. Examples include “Run this stage before that one” and
"This job depends on the output of that job."
A job is a linear series of steps. Steps can be tasks, scripts, or references to external templates.
20 https://docs.microsoft.com/en-us/azure/devops/extend/develop/add-build-task?view=azure-devops
●● Stage A
●● Job 1
●● Step 1.1
●● Step 1.2
●● ...
●● Job 2
●● Step 2.1
●● Step 2.2
●● ...
●● Stage B
●● ...
Simple pipelines don't require all these levels. For example, in a single job build you can omit the contain-
ers for stages and jobs because there are only steps. And because many options shown in this article
aren't required and have good defaults, your YAML definitions are unlikely to include all of them.
Pipeline
The schema for a pipeline…
name: string # build numbering format
resources:
  pipelines: [ pipelineResource ]
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: # several syntaxes
trigger: trigger
pr: pr
stages: [ stage | templateReference ]
If you have a single stage, you can omit the stages keyword and directly specify the jobs keyword:
# ... other pipeline-level keywords
jobs: [ job | templateReference ]
If you have a single stage and a single job, you can omit the stages and jobs keywords and directly
specify the steps keyword:
# ... other pipeline-level keywords
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
Stage
A stage is a collection of related jobs. By default, stages run sequentially. Each stage starts only after the
preceding stage is complete.
Use approval checks to manually control when a stage should run. These checks are commonly used to
control deployments to production environments.
Checks are a mechanism available to the resource owner. They control when a stage in a pipeline con-
sumes a resource. As an owner of a resource like an environment, you can define checks that are required
before a stage that consumes the resource can start.
This example runs three stages, one after another. The middle stage runs two jobs in parallel.
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building!
- stage: Test
  jobs:
  - job: TestOnWindows
    steps:
    - script: echo Testing on Windows!
  - job: TestOnLinux
    steps:
    - script: echo Testing on Linux!
- stage: Deploy
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploying the code!
Job
A job is a collection of steps run by an agent or on a server. Jobs can run conditionally and might depend
on earlier jobs.
jobs:
- job: MyJob
  displayName: My First Job
  continueOnError: true
  workspace:
    clean: outputs
  steps:
  - script: echo My first job
Steps
A step is a linear sequence of operations that make up a job. Each step runs in its own process on an
agent and has access to the pipeline workspace on a local hard drive. This behavior means environment
variables aren't preserved between steps, but file system changes are.
steps:
- script: echo This runs in the default shell on any machine
- bash: |
    echo This multiline script always runs in Bash.
    echo Even on Windows machines!
- pwsh: |
    Write-Host "This multiline script always runs in PowerShell Core."
    Write-Host "Even on non-Windows machines!"
Tasks
Tasks are the building blocks of a pipeline. There's a catalog of tasks available to choose from.
steps:
- task: VSBuild@1
  displayName: Build
  timeoutInMinutes: 120
  inputs:
    solution: '**\*.sln'
Templates
Template references
You can export reusable sections of your pipeline to a separate file. These separate files are known as
templates. Azure Pipelines supports four kinds of templates:
●● Stage
●● Job
●● Step
●● Variable
You can also use templates to control what is allowed in a pipeline and to define how parameters can be used.
Templates themselves can include other templates. Azure Pipelines supports a maximum of 50 unique
template files in a single pipeline.
Stage templates
You can define a set of stages in one file and use it multiple times in other files.
In this example, a stage is repeated twice for two different testing regimes. The stage itself is specified
only once.
# File: stages/test.yml
parameters:
  name: ''
  testFile: ''

stages:
- stage: Test_${{ parameters.name }}
  jobs:
  - job: ${{ parameters.name }}_Windows
    pool:
      vmImage: vs2017-win2016
    steps:
    - script: npm install
    - script: npm test -- --file=${{ parameters.testFile }}
  - job: ${{ parameters.name }}_Mac
    pool:
      vmImage: macos-10.14
    steps:
    - script: npm install
    - script: npm test -- --file=${{ parameters.testFile }}
Templated pipeline
# File: azure-pipelines.yml
stages:
- template: stages/test.yml # Template reference
  parameters:
    name: Mini
    testFile: tests/miniSuite.js
Job templates
You can define a set of jobs in one file and use it multiple times in other files.
In this example, a single job is repeated on three platforms. The job itself is specified only once.
# File: jobs/build.yml
parameters:
  name: ''
  pool: ''
  sign: false

jobs:
- job: ${{ parameters.name }}
  pool: ${{ parameters.pool }}
  steps:
  - script: npm install
  - script: npm test
  - ${{ if eq(parameters.sign, 'true') }}:
    - script: sign

# File: azure-pipelines.yml
jobs:
- template: jobs/build.yml # Template reference
  parameters:
    name: macOS
    pool:
      vmImage: 'macOS-10.14'
Step templates
You can define a set of steps in one file and use it multiple times in another file.
# File: steps/build.yml
steps:
- script: npm install
- script: npm test

# File: azure-pipelines.yml
jobs:
- job: macOS
  pool:
    vmImage: 'macOS-10.14'
  steps:
  - template: steps/build.yml # Template reference
- job: Linux
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - template: steps/build.yml # Template reference
- job: Windows
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - template: steps/build.yml # Template reference
  - script: sign # Extra step on Windows only
Variable templates
You can define a set of variables in one file and use it multiple times in other files.
In this example, a set of variables is repeated across multiple pipelines. The variables are specified only
once.
# File: variables/build.yml
variables:
- name: vmImage
  value: vs2017-win2016
- name: arch
  value: x64
- name: config
  value: debug

# File: component-x-pipeline.yml
variables:
- template: variables/build.yml # Template reference
pool:
  vmImage: ${{ variables.vmImage }}
steps:
- script: build x ${{ variables.arch }} ${{ variables.config }}

# File: component-y-pipeline.yml
variables:
- template: variables/build.yml # Template reference
pool:
  vmImage: ${{ variables.vmImage }}
steps:
- script: build y ${{ variables.arch }} ${{ variables.config }}
YAML resources
Resources in YAML represent sources such as pipelines, containers, and repositories. For more information on resources, see here21.
General schema
resources:
  pipelines: [ pipeline ]
  repositories: [ repository ]
  containers: [ container ]
Pipeline resource
If you have an Azure pipeline that produces artifacts, your pipeline can consume the artifacts by using the
pipeline keyword to define a pipeline resource.
resources:
  pipelines:
  - pipeline: MyAppA
    source: MyCIPipelineA
  - pipeline: MyAppB
    source: MyCIPipelineB
    trigger: true
  - pipeline: MyAppC
    project: DevOpsProject
    source: MyCIPipelineC
    branch: releases/M159
    version: 20190718.2
    trigger:
      branches:
        include:
        - master
        - releases/*
        exclude:
        - users/*
Container resource
Container jobs let you isolate your tools and dependencies inside a container. The agent launches an
instance of your specified container then runs steps inside it. The container keyword lets you specify your
container images.
Service containers run alongside a job to provide various dependencies like databases.
resources:
  containers:
  - container: linux
    image: ubuntu:16.04
21 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema
  - container: windows
    image: myprivate.azurecr.io/windowsservercore:1803
    endpoint: my_acr_connection
  - container: my_service
    image: my_service:tag
    ports:
    - 8080:80 # bind container port 80 to 8080 on the host machine
    - 6379 # bind container port 6379 to a random available port on the host machine
    volumes:
    - /src/dir:/dst/dir # mount /src/dir on the host into /dst/dir in the container
Repository resource
If your pipeline has templates in another repository, or if you want to use multi-repo checkout with a
repository that requires a service connection, you must let the system know about that repository. The
repository keyword lets you specify an external repository.
resources:
  repositories:
  - repository: common
    type: github
    name: Contoso/CommonTools
    endpoint: MyContosoServiceConnection
●● If there is a single checkout: none step, no repositories are synced or checked out.
●● If there is a single checkout: self step, the current repository is checked out.
●● If there is a single checkout step that isn't self or none, that repository is checked out instead of self.
●● If there are multiple checkout steps, each designated repository is checked out to a folder named
after the repository, unless a different path is specified in the checkout step. To check out self as one
of the repositories, use checkout: self as one of the checkout steps.
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- checkout: self
- checkout: MyGitHubRepo
- checkout: MyBitBucketRepo
- checkout: MyAzureReposGitRepository
- script: dir $(Build.SourcesDirectory)   # lists the checked-out repository folders
If the self repository is named CurrentRepo, the script command produces the following output: Curren-
tRepo MyAzureReposGitRepo MyBitBucketRepo MyGitHubRepo. In this example, the names of the
repositories are used for the folders, because no path is specified in the checkout step.
The default branch is checked out unless you designate a specific ref.
If you are using inline syntax, designate the ref by appending @ref. For example:
- checkout: git://MyProject/MyRepo@features/tools # checks out the features/tools branch
- checkout: git://MyProject/MyRepo@refs/heads/features/tools # also checks out the features/tools branch
- checkout: git://MyProject/MyRepo@refs/tags/MyTag # checks out the commit referenced by MyTag
Here is a common communication pattern between the agent and Azure Pipelines.
The user registers an agent with Azure Pipelines by adding it to an agent pool. You need to be an agent
pool administrator to register an agent in that agent pool. The identity of the agent pool administrator is needed only at the time of registration and is not persisted on the agent, nor is it used in any further communication between the agent and Azure Pipelines. Once the registration is complete, the agent
downloads a listener OAuth token and uses it to listen to the job queue.
Periodically, the agent checks to see if a new job request has been posted for it in the job queue in Azure
Pipelines. When a job is available, the agent downloads the job as well as a job-specific OAuth token. This
token is generated by Azure Pipelines for the scoped identity specified in the pipeline. That token is short
lived and is used by the agent to access resources (e.g., source code) or modify resources (e.g., upload
test results) on Azure Pipelines within that job.
Once the job is completed, the agent discards the job-specific OAuth token and goes back to checking if
there is a new job request using the listener OAuth token.
The payload of the messages exchanged between the agent and Azure Pipelines is secured using
asymmetric encryption. Each agent has a public-private key pair, and the public key is exchanged with the
server during registration. The server uses the public key to encrypt the payload of the job before
sending it to the agent. The agent decrypts the job content using its private key. This is how secrets
stored in build pipelines, release pipelines, or variable groups are secured as they are exchanged with the
agent.
If your on-premises environments do not have connectivity to a Microsoft-hosted agent pool (which is
typically the case due to intermediate firewalls), you'll need to manually configure a self-hosted agent on
on-premises computer(s). The agents must have connectivity to the target on-premises environments,
and access to the Internet to connect to Azure Pipelines or Team Foundation Server, as shown in the
following diagram.
Other considerations
Authentication
To register an agent, you need to be a member of the administrator role in the agent pool. The identity of
agent pool administrator is needed only at the time of registration and is not persisted on the agent and
is not used in any subsequent communication between the agent and Azure Pipelines. In addition, you
must be a local administrator on the server to configure the agent. Your agent can authenticate to Azure Pipelines or TFS using one of several supported methods, such as a personal access token (PAT).
The account that you use to run the agent is independent from the credentials that you use when you register the agent with Azure Pipelines. The choice of agent account depends solely on the needs of the tasks running in your build and deployment jobs.
For example, to run tasks that use Windows authentication to access an external service, you must run
the agent using an account that has access to that service. However, if you are running UI tests such as
Selenium or Coded UI tests that require a browser, the browser is launched in the context of the agent
account.
After you've configured the agent, we recommend you first try it in interactive mode to make sure it
works. Then, for production use, we recommend you run the agent in one of the following modes so that
it reliably remains in a running state. These modes also ensure that the agent starts automatically if the
machine is restarted.
As a service. You can leverage the service manager of the operating system to manage the lifecycle of the
agent. In addition, the experience for auto-upgrading the agent is better when it is run as a service.
As an interactive process with auto-logon enabled. In some cases, you might need to run the agent
interactively for production use - such as to run UI tests. When the agent is configured to run in this
mode, the screen saver is also disabled. Some domain policies may prevent you from enabling auto-log-
on or disabling the screen saver. In such cases, you may need to seek an exemption from the domain
policy or run the agent on a workgroup computer where the domain policies do not apply.
Note: There are security risks when you enable automatic logon or disable the screen saver because you
enable other users to walk up to the computer and use the account that automatically logs on. If you
configure the agent to run in this way, you must ensure the computer is physically protected; for exam-
ple, located in a secure facility. If you use Remote Desktop to access the computer on which an agent is
running with auto-logon, simply closing the Remote Desktop causes the computer to be locked and any
UI tests that run on this agent may fail. To avoid this, use the tscon command to disconnect from Remote
Desktop. For example:
%windir%\System32\tscon.exe 1 /dest:console
22 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=vsts
23 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/proxy?view=vsts&tabs=windows
Labs
Lab 06a: Enabling continuous integration with
Azure Pipelines
Lab overview
In this lab, you will learn how to configure continuous integration (CI) and continuous deployment (CD)
for your applications using Build and Release in Azure Pipelines. This scriptable CI/CD system is both
web-based and cross-platform, while also providing a modern interface for visualizing sophisticated
workflows. Although we won’t demonstrate all the cross-platform possibilities in this lab, it is important
to point out that you can also build for iOS, Android, Java (using Ant, Maven, or Gradle) and Linux.
Objectives
After you complete this lab, you will be able to:
●● Create a basic build pipeline from a template
●● Track and review a build
●● Invoke a continuous integration build
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions24
24 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Many open-source projects are already using Azure Pipelines for CI/CD, such as Atom, CPython, Pipenv, Tox, Visual Studio Code, and TypeScript, and the list is growing every day.
In this lab, you’ll see how easy it is to set up Azure Pipelines with your GitHub projects and how you can
start seeing benefits immediately.
Objectives
After you complete this lab, you will be able to:
●● Install Azure Pipelines from the GitHub Marketplace
●● Integrate a GitHub project with an Azure DevOps pipeline
●● Track pull requests through the pipeline
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions25
25 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
You want to take your build server offline to make a configuration change. You want it to complete any
build that it is currently processing, but you want to queue any new build requests. What should you do?
Review Question 3
You want to set a maximum time that builds can run for. Builds should not run for more than 5 minutes.
What configuration change should you make?
Answers
Name the four pillars of continuous integration.
Continuous Integration relies on four key elements for successful implementation: a Version Control System, a Package Management System, a Continuous Integration System, and an Automated Build Process.
You want to take your build server offline to make a configuration change. You want it to complete any
build that it is currently processing, but you want to queue any new build requests. What should you do?
You should pause the build. A paused build will not start new builds and will queue any new build requests.
You want to set a maximum time that builds can run for. Builds should not run for more than 5 minutes.
What configuration change should you make?
You should change the build job timeout setting to 5 minutes. A blank value means unlimited.
Module 7 Managing Application Configuration
and Secrets
Module overview
Module overview
Gone are the days of tossing a build over the wall and hoping that it works in production. Now develop-
ment and operations are joined together as one in DevOps. DevOps accelerates the velocity with which
products are deployed to customers. However, the catch with DevOps is that it moves fast, and security
must move faster to keep up and make an impact. When products were built under the waterfall process,
the release cycle was measured in years, so security process could take almost as long as it wanted. Face
it, DevOps is here to stay, and it is not getting any slower. Application security must speed up to keep
pace with the speed of business. Security automation is king under DevOps.
Learning objectives
After completing this module, students will be able to:
●● Manage application configuration and secrets
●● Integrate Azure Key Vault with a pipeline
Introduction to security
Introduction to security
While a DevOps way of working enables development teams to deploy applications faster, going faster
over a cliff doesn’t really help! Thanks to the cloud, DevOps teams have access to unprecedented infra-
structure and scale. But that also means they can be approached by some of the most nefarious actors on
the internet, as they risk the security of their business with every application deployment. Perimeter-class security is no longer viable in such a distributed environment, so companies now need to adopt more micro-level security across applications and infrastructure and have multiple lines of defense.
With continuous integration and continuous delivery, how do you ensure your applications are secure
and stay secure? How can you find and fix security issues early in the process? This begins with practices
commonly referred to as DevSecOps. DevSecOps incorporates the security team and their capabilities
into your DevOps practices making security a responsibility of everyone on the team.
Security needs to shift from an afterthought to being evaluated at every step of the process. Securing
applications is a continuous process that encompasses secure infrastructure, designing an architecture
with layered security, continuous security validation, and monitoring for attacks.
Security is everyone’s responsibility and needs to be looked at holistically across the application life cycle.
In this module we’ll discuss practical examples using real code for automating security tasks. We’ll also
see how continuous integration and deployment pipelines can accelerate the speed of security teams and
improve collaboration with software development teams.
vulnerabilities. The OWASP organization (Open Web Application Security Project) lists injections in their
OWASP Top 10 2017 document as the number one threat to web application security.
In this tutorial we will simulate a SQL injection attack.
Getting started
●● Use the SQL Injection ARM template here1 to provision a web app and a SQL database with known
SQL injection vulnerability.
●● Ensure you can browse to the ‘Contoso Clinic’ web app provisioned in your SQL injection resource
group.
How it works
1. Navigate to the Patients view and, in the search box, type "'" and hit Enter. You will see an error page with a SQL exception indicating that the search box is feeding the text into a SQL statement.
The helpful error message is enough to guess that the text in the search box is being appended into the
SQL statement.
2. Next try passing a SQL statement 'AND FirstName = 'Kim'-- in the search box. You will see that
the results in the list below are filtered down to only show the entry with firstname Kim.
1 https://azure.microsoft.com/en-us/resources/templates/101-sql-injection-attack-prevention/
3. You can try to order the list by SSN by using this statement in the search box 'order by SSN--.
4. Now, for the finale, run this drop statement to drop the table that holds the information displayed on this page: 'AND 1=1; Drop Table Patients --. Once the operation is complete, try to load the page. You'll see that the view errors out with an exception indicating that the dbo.Patients table cannot be found.
There's more
The Azure Security Center team has other playbooks2 you can look at to learn how vulnerabilities are exploited to trigger a virus attack and a DDoS attack.
2 https://azure.microsoft.com/en-gb/blog/enhance-your-devsecops-practices-with-azure-security-center-s-newest-playbooks/
Getting started
●● Download and install the Threat Modeling Tool5
How to do it
1. Launch the Microsoft Threat Modeling Tool and choose the option to Create a Model.
3 https://docs.microsoft.com/en-us/azure/security/azure-security-threat-modeling-tool-feature-overview
4 https://blogs.msdn.microsoft.com/secdevblog/2018/09/12/microsoft-threat-modeling-tool-ga-release/
5 https://aka.ms/threatmodelingtool
2. From the right panel, search for and add Azure App Service Web App and Azure SQL Database, then link them up to show a request and response flow, as demonstrated below.
3. From the toolbar menu, select View -> Analysis view. The analysis view will show you a full list of threats categorized by severity.
4. To generate a full report of the threats, from the toolbar menu select Reports -> Create full report, then select a location to save the report.
A full report is generated with details of each threat, along with the SDLC phase it applies to, as well as possible mitigations and links to more details.
There's more
You can find a full list of threats used in the Threat Modeling Tool here6.
6 https://docs.microsoft.com/en-us/azure/security/develop/threat-modeling-tool-threats
Continuous integration
The CI build should be executed as part of the pull request (PR-CI) process and once the merge is
complete. Typically, the primary difference between the two runs is that the PR-CI process doesn't need
to do any of the packaging/staging that is done in the CI build. These CI builds should run static code
analysis tests to ensure that the code is following all rules for both maintenance and security. Several
tools can be used for this, such as one of the following:
●● SonarQube
●● Visual Studio Code Analysis and the Roslyn Security Analyzers
●● Checkmarx - A Static Application Security Testing (SAST) tool
●● BinSkim - A binary static analysis tool that provides security and correctness results for Windows
portable executables
●● and many more
Many of the tools seamlessly integrate into the Azure Pipelines build process. Visit the Visual Studio
Marketplace for more information on the integration capabilities of these tools.
In addition to verifying code quality with the CI build, two other validations that are often tedious or ignored are scanning 3rd-party packages for vulnerabilities and checking OSS license usage. Often, when we ask about 3rd-party package vulnerabilities and licenses, the response is fear or uncertainty. Organizations that are trying to manage 3rd-party package vulnerabilities and/or OSS licenses explain that their process for doing so is tedious and manual. Fortunately, there are a couple of tools by WhiteSource Software that can make this identification process almost instantaneous.
In a later module, we will discuss the integration of several useful and commonly used security and
compliance tools.
Example
It is 2:00 AM. Adam is done making all changes to his super awesome piece of code. The tests are all running fine. He hits commit -> push -> all commits are pushed successfully to git. Happily, he drives back home. Ten minutes later he gets a call from the SecurityOps engineer: “Adam, did you push the secret key to our public repo?”
YIKES! That blah.config file, Adam thinks. How could I have forgotten to include that in .gitignore? The nightmare has already begun.
We can surely try to blame Adam here for committing the sin of checking in sensitive secrets and not following the recommended practices for managing configuration files, but the bigger point is that if the underlying toolchain had abstracted configuration management away from the developer, this fiasco would never have happened!
History
The virus was injected a long time ago.
Since the early days of .NET, there has been the notion of app.config and web.config files, which give developers a way to make their code flexible by moving common configuration into these files. When used effectively, these files are proven to be worthy of dynamic configuration changes. However, a lot of the time we see misuse of what goes into these files. A common culprit is how samples and documentation have been written: most samples on the web leverage these config files for storing key elements such as connection strings and even passwords. The values might be obfuscated, but what we are telling developers is, “hey, this is a great place to push your secrets!”. So, in a world where we are preaching the use of configuration files, we can’t blame the developer for not managing their governance. Don’t get me wrong; I am not challenging the use of configuration here, it is an absolute need of any good implementation. I am instead debating the use of multiple JSON, XML, and YAML files for maintaining configuration settings. Configs are great for ensuring the flexibility of the application; config files, however, in my opinion, are a pain to manage, especially across environments.
Separation of concerns
One of the key reasons we would want to move the configuration away from source control is to deline-
ate responsibilities. Let’s define some roles to elaborate this, none of these are new concepts but rather a
high-level summary:
●● Configuration custodian: Responsible for generating and maintaining the life cycle of configuration values. This includes CRUD operations on keys, ensuring the security of secrets, regeneration of keys and tokens, and defining configuration settings such as log levels for each environment. This role can be owned by operations engineers and security engineers, who inject configuration files through proper DevOps processes and CI/CD implementation. Note that they do not define the actual configuration but are custodians of its management.
●● Configuration consumer: Responsible for defining the schema (loose term) for the configuration that needs to be in place and then consuming the configuration values in the application or library code. These are the dev and test teams; they should not be concerned with what the value of a key is, but rather with what the capability of the key is. For example, a developer may need a different ConnectionString in the application but does not need to know the actual value across different environments.
●● Configuration store: The underlying store that is leveraged to store the configuration. While this can be a simple file, in a distributed application it needs to be a reliable store that can work across environments. The store is responsible for persisting values that modify the behavior of the application per environment but are not sensitive and do not require any encryption or HSM modules.
●● Secret store: While you can store configuration and secrets together, doing so violates our separation of concerns principle, so the recommendation is to leverage a separate store for persisting secrets. This allows a secure channel for sensitive configuration data such as connection strings, enables the operations team to keep credentials, certificates, and tokens in one repository, and minimizes the security risk in case the configuration store gets compromised.
Depending on the type of backing store used, and the latency of this store, it might be helpful to implement a caching mechanism within the external configuration store. For more information, see the Caching Guidance. The figure illustrates an overview of the External Configuration Store pattern with an optional local cache.
7 https://docs.microsoft.com/en-us/azure/key-vault/key-vault-overview
Service Principals
Azure AD offers a variety of mechanisms for authentication. In DevOps projects though, one of the most
important is the use of Service Principals.
Azure AD applications
Applications are registered with an Azure AD tenant within Azure Active Directory. Registering an applica-
tion creates an identity configuration. You also determine who can use it:
●● Accounts in the same organizational directory
●● Accounts in any organizational directory
●● Accounts in any organizational directory and Microsoft Accounts (personal)
●● Microsoft Accounts (Personal accounts only)
8 https://docs.github.com/en/free-pro-team@latest/github/setting-up-and-managing-organizations-and-teams/enforcing-saml-single-
sign-on-for-your-organization
9 https://docs.github.com/en/free-pro-team@latest/github/setting-up-and-managing-organizations-and-teams/about-scim
Client secret
Once the application is created, you should then create at least one client secret for the application.
Grant permissions
The application identity can then be granted permissions within services and resources that trust Azure
Active Directory.
Service principal
To access resources, an entity must be represented by a security principal. To connect, the entity must
know:
●● TenantID
●● ApplicationID
●● Client Secret
For more information on Service Principals, see: App Objects and Service Principals10
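In an Azure DevOps pipeline, a service principal is typically consumed indirectly through an Azure Resource Manager service connection rather than by handling these values directly. A minimal sketch (the service connection name is an assumption):
steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'MyServiceConnection'   # ARM service connection backed by the service principal
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: az group list --output table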
The traditional answer would have been to use SQL Authentication with a username and password, but
this leaves yet another credential that needs to be managed on an ongoing basis.
10 https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals
11 https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
Key-value pairs
Azure App Configuration stores configuration data as key-value pairs.
Keys
Keys serve as the name for key-value pairs and are used to store and retrieve corresponding values. It's a
common practice to organize keys into a hierarchical namespace by using a character delimiter, such as /
or :. Use a convention that's best suited for your application. App Configuration treats keys as a whole. It
doesn't parse keys to figure out how their names are structured or enforce any rule on them.
Keys stored in App Configuration are case-sensitive, Unicode-based strings. The keys app1 and App1 are
distinct in an App Configuration store. Keep this in mind when you use configuration settings within an
application because some frameworks handle configuration keys case-insensitively.
You can use any Unicode character in key names entered into App Configuration except for *, ,, and \.
These characters are reserved. If you need to include a reserved character, you must escape it by using \
{Reserved Character}. There's a combined size limit of 10,000 characters on a key-value pair. This
limit includes all characters in the key, its value, and all associated optional attributes. Within this limit,
you can have many hierarchical levels for keys.
Label keys
Key values in App Configuration can optionally have a label attribute. Labels are used to differentiate key
values with the same key. A key app1 with labels A and B forms two separate keys in an App Configura-
tion store. By default, the label for a key value is empty, or null.
Label provides a convenient way to create variants of a key. A common use of labels is to specify multiple
environments for the same key:
Key = AppName:DbEndpoint & Label = Test
Key = AppName:DbEndpoint & Label = Staging
Key = AppName:DbEndpoint & Label = Production
Values
Values assigned to keys are also Unicode strings. You can use all Unicode characters for values. There's an
optional user-defined content type associated with each value. Use this attribute to store information, for
example an encoding scheme, about a value that helps your application to process it properly.
Configuration data stored in an App Configuration store, which includes all keys and values, is encrypted
at rest and in transit. App Configuration isn't a replacement solution for Azure Key Vault. Don't store
application secrets in it.
Basic concepts
Here are several new terms related to feature management:
●● Feature flag: A feature flag is a variable with a binary state of on or off. The feature flag also has an
associated code block. The state of the feature flag triggers whether the code block runs or not.
●● Feature manager: A feature manager is an application package that handles the lifecycle of all the
feature flags in an application. The feature manager typically provides additional functionality, such as
caching feature flags and updating their states.
●● Filter: A filter is a rule for evaluating the state of a feature flag. A user group, a device or browser type,
a geographic location, and a time window are all examples of what a filter can represent.
An effective implementation of feature management consists of at least two components working in
concert:
●● An application that makes use of feature flags.
●● A separate repository that stores the feature flags and their current states.
How these components interact is illustrated in the following examples.
In the simplest case, a code block is wrapped in a conditional such as if (featureFlag) { ... }: if featureFlag is set to true, the enclosed code block is executed; otherwise, it's skipped.
You can set the value of featureFlag statically, as in the following code example:
bool featureFlag = true;
You can also evaluate the flag's state based on certain rules:
bool featureFlag = isBetaUser();
A slightly more complicated feature flag pattern includes an else statement as well:
if (featureFlag) {
    // This following code will run if the featureFlag value is true
} else {
    // This following code will run if the featureFlag value is false
}
Lab
Lab 07: Integrating Azure Key Vault with Azure
DevOps
Lab overview
Azure Key Vault provides secure storage and management of sensitive data, such as keys, passwords, and certificates. Azure Key Vault includes support for hardware security modules, as well as a range of encryption algorithms and key lengths. By using Azure Key Vault, you can minimize the possibility of disclosing sensitive data through source code, which is a common mistake made by developers. Access to Azure Key Vault requires proper authentication and authorization, supporting fine-grained permissions to its content.
In this lab, you will see how you can integrate Azure Key Vault with an Azure DevOps pipeline by using the following steps (a sketch of the key pipeline task follows the list):
●● create an Azure Key vault to store a MySQL server password as a secret.
●● create an Azure service principal to provide access to secrets in the Azure Key vault.
●● configure permissions to allow the service principal to read the secret.
●● configure pipeline to retrieve the password from the Azure Key vault and pass it on to subsequent
tasks.
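A minimal sketch of the kind of pipeline step involved (the service connection, vault, and secret names are assumptions):
steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'MyServiceConnection'   # hypothetical ARM service connection
    KeyVaultName: 'my-key-vault'               # hypothetical vault name
    SecretsFilter: 'sqlServerPassword'         # hypothetical secret name
- script: echo "The secret is available as $(sqlServerPassword)"   # the value is masked in logs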
Objectives
After you complete this lab, you will be able to:
●● Create an Azure Active Directory (Azure AD) service principal.
●● Create an Azure key vault.
●● Track pull requests through the Azure DevOps pipeline.
Lab duration
●● Estimated time: 40 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions12
12 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
What is the Azure Key Vault and why would you use it?
Answers
What are the five stages of threat modeling?
Define security requirements. Create an application diagram. Identify threats. Mitigate threats. Validate that
threats have been mitigated.
What is the Azure Key Vault and why would you use it?
Azure Key Vault is a cloud key management service that allows you to create, import, store, and maintain keys and secrets used by your cloud applications. The applications have no direct access to the keys, which helps improve the security and control over the stored keys and secrets. Use Key Vault to centralize application and configuration secrets, securely store secrets and keys, and monitor access and use.
Module 8 Implementing Continuous Integra-
tion with GitHub Actions
Module overview
Module Overview
GitHub Actions are the primary mechanism for automation within GitHub. They can be used for a wide
variety of purposes, but one of the most common is to implement Continuous Integration.
Learning Objectives
After completing this module, students will be able to:
●● Create and work with GitHub Actions and Workflows
●● Implement Continuous Integration with GitHub Actions
GitHub Actions
What are actions?
Actions are the mechanism used to provide workflow automation within the GitHub environment.
They are often used to build continuous integration (CI) and continuous deployment (CD) solutions. How-
ever, they can be used for a wide variety of tasks:
●● Automated testing
●● Automatically responding to new issues and mentions
●● Triggering code reviews
●● Handling pull requests
●● Branch management
They are defined in YAML and reside within GitHub repositories.
Actions are executed on “runners”, either hosted by GitHub, or self-hosted.
Contributed actions can be found in the GitHub Marketplace, see: Marketplace Actions1
Actions flow
GitHub tracks events that occur. Events can trigger the start of workflows. Workflows can also start on
cron-based schedules and can be triggered by events outside of GitHub. They can be manually triggered.
Workflows are the unit of automation. They contain Jobs.
1 https://github.com/marketplace?type=actions
Workflows
Workflows define the automation required.
They detail the events that should trigger the workflow, and they define the jobs that should run when the workflow is triggered.
Within each job, they define the location that the actions will run in, i.e., which runner to use.
Workflows are written in YAML and live within a GitHub repository, at the location .github/workflows
Example workflow:
# .github/workflows/build.yml
name: Node Build

on: [push]

jobs:
  mainbuild:
    strategy:
      matrix:
        node-version: [12.x]
        os: [windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
    - uses: actions/checkout@v1
    - name: Run node.js on latest Windows
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
    - name: Install NPM and build
      run: |
        npm ci
        npm run build
2 https://github.com/actions/starter-workflows
3 https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions
●● jobs: is the list of jobs to be executed. Workflows can contain one or more jobs.
●● runs-on: tells Actions which runner to use.
●● steps: is the list of steps for the job. Steps within a job execute on the same runner.
●● uses: tells Actions which predefined action needs to be retrieved. For example, you might have an
action that installs node.js.
●● run: tells the job to execute a command on the runner. For example, you might execute an NPM
command.
You can see the allowable syntax for workflows here: Workflow syntax for GitHub Actions4
Events
Events are implemented by the on clause in a workflow definition.
There are several types of events that can trigger workflows.
Scheduled events
With this type of trigger, a cron schedule needs to be provided.
on:
  schedule:
    # minute 0 of every hour from 08:00 to 17:00, Monday through Friday
    - cron: '0 8-17 * * 1-5'
Code events
Most actions will be triggered by code events. These occur when an event of interest occurs in the repository.
on:
  pull_request
4 https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions
on:
  [push, pull_request]
The above event would fire when either a push or a pull request occurs.
on:
  pull_request:
    branches:
      - develop
The above event shows how to be specific about the part of the codebase that is relevant. In this case, the workflow will fire when a pull request targets the develop branch.
Manual events
There is a special event that is used to manually trigger workflow runs. For this, you should use the
workflow_dispatch event. To use this, your workflow must be in the default branch for the repository.
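For illustration, here is a minimal sketch of a manually triggered workflow. The environment input and the echo step are assumptions added for this example, not part of the course lab:
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment to deploy to'
        required: true
        default: 'staging'
jobs:
  manual-run:
    runs-on: ubuntu-latest
    steps:
      # the chosen value is available through the github.event.inputs context
      - run: echo "Deploying to ${{ github.event.inputs.environment }}"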
Webhook events
Workflows can be executed when a GitHub webhook is called.
on:
  gollum
This event would fire when someone updates (or first creates) a Wiki page.
External events
Events can use the repository_dispatch trigger, which allows workflows to be fired by external systems.
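As a minimal sketch, a workflow that listens for a hypothetical external event type called deploy-request could look like this; the event type and payload field are assumptions:
on:
  repository_dispatch:
    types: [deploy-request]
jobs:
  handle-external-event:
    runs-on: ubuntu-latest
    steps:
      # client_payload is whatever JSON the external system sends to the GitHub REST API
      - run: echo "Received ${{ github.event.action }} for ${{ github.event.client_payload.ref }}"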
For more information on events, see Events that trigger workflows5
Jobs
Workflows contain one or more jobs. A job is a set of steps that will be run in order on a runner.
Steps within a job execute on the same runner and share the same filesystem.
The logs produced by jobs are searchable and artifacts produced can be saved.
5 https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows
steps:
- run: ./build_new_server.sh
Sometimes you might need one job to wait for another job to complete. You can do that by defining
dependencies between the jobs.
jobs:
  startup:
    runs-on: ubuntu-latest
    steps:
      - run: ./setup_server_configuration.sh
  build:
    needs: startup
    # each job needs its own runner; ubuntu-latest is assumed here
    runs-on: ubuntu-latest
    steps:
      - run: ./build_new_server.sh
Note: if the startup job in the example above fails, the build job will not execute.
For more information on job dependencies, see the section Creating Dependent Jobs at Managing
complex workflows6
Runners
When you execute jobs, the steps execute on a Runner. The steps can be the execution of a shell script, or
the execution of a predefined Action.
GitHub provides several hosted runners, so that you do not need to spin up your own infrastructure to run actions.
At present, the maximum duration of a job is 6 hours, and for a workflow is 72 hours.
For JavaScript code, you have implementations of node.js on:
●● Windows
●● MacOS
●● Linux
If you need to use other languages, a Docker container can be used. At present, the Docker container
support is only Linux based.
These options allow you to write in whatever language you prefer. JavaScript actions will be faster (no
container needs to be used), and the runtime is more versatile. The GitHub UI is also better for working
with JavaScript actions.
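To illustrate the difference, the sketch below shows the metadata file (action.yml) for a hypothetical JavaScript action; the name, input, and file names are assumptions. A container-based action would instead use using: 'docker' and point to a Dockerfile or image:
# action.yml for a hypothetical JavaScript action
name: 'Hello Timer'
description: 'Prints a greeting and the current time'
inputs:
  who-to-greet:
    description: 'Who to greet'
    required: true
    default: 'World'
runs:
  using: 'node12'
  main: 'index.js'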
Self-hosted runners
If you need different configurations to the ones provided, you can create a self-hosted runner.
GitHub has published the source code for self-hosted runners as open-source, and you can find it here:
https://github.com/actions/runner
This allows you to completely customize the runner; however, you then need to maintain (patch, upgrade) the runner system.
6 https://docs.github.com/en/free-pro-team@latest/actions/learn-github-actions/managing-complex-workflows
Console output can be helpful in debugging. If it isn't sufficient, you can also enable additional logging.
See: Enabling debug logging8
7 https://docs.github.com/en/free-pro-team@latest/actions/hosting-your-own-runners/about-self-hosted-runners
8 https://docs.github.com/en/free-pro-team@latest/actions/managing-workflow-runs/enabling-debug-logging
Tags
Tags allow you to specify the precise versions that you want to work with.
steps:
- uses: actions/install-timer@v2.0.1
Branches
A common way to request actions is to refer to the branch that you want to work with. You'll then get the
latest version from that branch. That means you'll benefit from updates, but it also increases the chance
of the code breaking.
steps:
- uses: actions/install-timer@develop
9 https://lab.github.com/githubtraining/github-actions:-hello-world
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x]
    steps:
      - uses: actions/checkout@main
      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '3.1.x'
      - run: dotnet build awesomeproject
Environment variables
When using Actions to create CI or CD workflows, you will typically need to be able to pass variable
values to the actions. This is done by using Environment Variables.
jobs:
  verify-connection:
    # runs-on added so the job is complete; any hosted runner would do
    runs-on: ubuntu-latest
    steps:
      - name: Verify Connection to SQL Server
        run: node testconnection.js
        env:
          PROJECT_SERVER: PH202323V
          PROJECT_DATABASE: HAMaster
For more details on environment variables, including a list of built-in environment variables, see: Environment variables10
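For example, built-in variables such as GITHUB_REPOSITORY and GITHUB_SHA can be read directly in a run step. The job below is only a sketch to show the idea:
jobs:
  show-context:
    runs-on: ubuntu-latest
    steps:
      - name: Show built-in environment variables
        run: |
          echo "Repository: $GITHUB_REPOSITORY"
          echo "Commit: $GITHUB_SHA"
          echo "Triggered by: $GITHUB_EVENT_NAME"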
Upload-artifact
This action can upload one or more files from your workflow to be shared between jobs.
You can upload a specific file:
- uses: actions/upload-artifact
  with:
    name: harness-build-log
    path: bin/output/logs/harness.log
10 https://docs.github.com/en/free-pro-team@latest/actions/reference/environment-variables
You can also upload multiple files by providing more than one path:
- uses: actions/upload-artifact
  with:
    name: harness-build-logs
    path: |
      bin/output/logs/harness.log
      bin/output/logs/harnessbuild.txt
Download-artifact
There is a corresponding action for downloading (or retrieving) artifacts.
- uses: actions/download-artifact
  with:
    name: harness-build-log
Artifact retention
A default retention period can be set for the repository, organization, or enterprise.
You can set a custom retention period when uploading, but it cannot exceed the defaults for the repository, organization, or enterprise.
- uses: actions/upload-artifact
  with:
    name: harness-build-log
    path: bin/output/logs/harness.log
    retention-days: 12
Deleting artifacts
You can delete artifacts directly in the GitHub UI.
For details, see: Removing workflow artifacts13
Workflow badges
Badges can be used to show the status of a workflow within a repository.
They show if a workflow is currently passing or failing. While they can appear in several locations, they
typically get added to the README.md file for the repository.
Badges are added by using URLs. The URLs are formed as follows:
https://github.com/AAAAA/RRRRR/workflows/WWWWW/badge.svg
where:
●● AAAAA is the account name
●● RRRRR is the repository name
11 https://github.com/actions/upload-artifact
12 https://github.com/actions/download-artifact
13 https://docs.github.com/en/free-pro-team@latest/actions/managing-workflow-runs/removing-workflow-artifacts
They usually indicate the status of the default branch but can be branch specific. You do this by adding a
URL query parameter:
?branch=BBBBB
where:
●● BBBBB is the branch name.
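For example, assuming a hypothetical account contoso, a repository named eshop, and the Node Build workflow shown earlier, the badge URL for the develop branch would be the following (spaces in the workflow name are URL-encoded); this URL is typically embedded as an image in README.md:
https://github.com/contoso/eshop/workflows/Node%20Build/badge.svg?branch=develop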
For more details, see: Adding a workflow status badge14
GitHub releases are based on Git tags. Often these tags will contain version numbers, but they can contain other values.
14 https://docs.github.com/en/free-pro-team@latest/actions/managing-workflow-runs/adding-a-workflow-status-badge
15 https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/about-releases
Secrets
Secrets are similar to environment variables but encrypted. They can be created at two levels:
●● Repository
●● Organization
If secrets are created at the organization level, access policies can be used to limit the repositories that
can use them.
16 https://docs.github.com/en/free-pro-team@latest/actions/reference/encrypted-secrets
steps:
  - name: Test Database Connectivity
    # a 'uses' line is needed for a complete step; ./.github/actions/db-test is a hypothetical local action
    uses: ./.github/actions/db-test
    with:
      db_username: ${{ secrets.DBUserName }}
      db_password: ${{ secrets.DBPassword }}
Limitations
Workflows can use up to 100 secrets, and they are limited to 64KB in size.
For more information on creating secrets, see: Encrypted secrets17
17 https://docs.github.com/en/free-pro-team@latest/actions/reference/encrypted-secrets
Lab
Lab 08: Implementing GitHub Actions by using
DevOps Starter
Lab overview
In this lab, you will learn how to implement a GitHub Action workflow that deploys an Azure web app by
using DevOps Starter.
Objectives
After you complete this lab, you will be able to:
●● Implement a GitHub Action workflow by using DevOps Starter
●● Explain the basic characteristics of GitHub Action workflows
Lab duration
●● Estimated time: 30 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions18
18 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
Database passwords that are needed in a CI pipeline should be stored where?
Review Question 3
The metadata for an action is held in which file?
Review Question 4
How can the status of a workflow be shown in a repository?
Answers
True or False: Self-hosted runners should be used with public repos.
False. A fork of a public repository can run potentially untrusted code on your self-hosted runner, so self-hosted runners are recommended only for private repositories.
Encrypted Secrets
action.yml
Using Badges
Module 9 Designing and Implementing a Dependency Management Strategy
Module overview
Module overview
In this module, we will talk about managing dependencies in software development. We are going to
cover what dependencies are and how to identify them in your codebase. Then you will learn how to
package these dependencies and manage the packages in package feeds. Finally, you are going to learn
about versioning strategies.
We will look at dependency management as a concept in software and why it is needed. We are going to
look at dependency management strategies and how you can identify components in your source code
and change these to dependencies.
Learning objectives
After completing this module, students will be able to:
●● Recommend artifact management tools and practices
●● Abstract common packages to enable sharing and reuse
●● Migrate and consolidate artifacts
●● Migrate and integrate source control measures
Packaging dependencies
What is dependency management?
Before we can understand dependency management, we first need to introduce the concept of dependencies.
Dependencies in software
Modern software development involves complex projects and solutions. Projects have dependencies on other projects, and solutions are not single pieces of software. The solutions and software being built consist of multiple parts and components, which are often reused.
As a codebase expands and evolves, it needs to be componentized to remain maintainable. A team that writes software will not write every piece of code by itself, but will leverage existing code written by other teams or companies, as well as open-source code that is readily available. Each component can have its own maintainers, speed of change, and distribution, giving both the creators and consumers of the components autonomy.
A software engineer will need to identify the components that make up parts of the solution and decide
whether to write the implementation or include an existing component. The latter approach introduces a
dependency on other components.
As your codebase grows and changes, you need to consider the changes in your dependencies as well. This requires a versioning mechanism for the dependencies, so you can select the version of a dependency you want to use.
Identifying dependencies
It starts with identifying the dependencies in your codebase and deciding which dependencies will be
formalized.
Your software project and its solution probably already use dependencies. It is very common to use libraries and frameworks that are not written by yourself. Additionally, your existing codebase might have internal dependencies that are not treated as such. For example, take a piece of code that implements a certain business domain model. It might be included as source code in your project and consumed by other projects and teams. You need to investigate your codebase to identify pieces of code that can be considered dependencies and treat them as such. This requires changes to how you organize your code and build the solution, and it results in a more componentized codebase.
1 https://docs.microsoft.com/en-us/azure/devops/artifacts/collaborate-with-packages?view=vsts
Decomposing could also mean that you will replace your own implementation of reusable code with an
available open source or commercial component.
Package management
Packages
Packages are used to define the components you rely and depend upon in your software solution. They provide a way to store those components in a well-defined format, with metadata to describe them.
What is a package?
A package is a formalized way of creating a distributable unit of software artifacts that can be consumed
from another software solution. The package describes the content it contains and usually provides
additional metadata. This additional information uniquely identifies the individual package and makes it self-descriptive. It helps to better store packages in centralized locations and consume the contents of
the package in a predictable manner. In addition, it enables tooling to manage the packages in the
software solution.
Types of packages
Packages can be used for a variety of components. The type of components you want to use in your
codebase differs for the different parts and layers of the solution you are creating. These range from
frontend components, such as JavaScript code files, to backend components like .NET assemblies or Java
components, complete self-contained solutions, or reusable files in general.
Over the past years the packaging formats have changed and evolved. Now there are a couple of de
facto standard formats for packages.
●● NuGet
NuGet (pronounced “new get”) is a standard used for .NET code artifacts. This includes .NET
assemblies and related files, tooling and sometimes only metadata. NuGet defines the way packages
are created, stored, and consumed. A NuGet package is essentially a compressed folder structure with
files in ZIP format and has the .nupkg extension.
See also An introduction to NuGet2
●● NPM
An NPM package is used for JavaScript development. It originates from node.js development where it
is the default packaging format. A NPM package is a file or folder that contains JavaScript files and a
package.json file describing the metadata of the package. For node.js the package usually contains
one or more modules that can be loaded once the package is consumed.
See also About packages and modules3
●● Maven
Maven is used for Java-based projects. Each package has a Project Object Model (POM) file describing the metadata of the project, and this file is the basic unit for defining a package and working with it.
●● PyPi
The Python Package Index, abbreviated as PyPI and known as the Cheese Shop, is the official
third-party software repository for Python.
●● Docker
Docker packages are called images and contain complete and self-contained deployments of components. Most commonly a Docker image represents a software component that can be hosted and
2 https://docs.microsoft.com/en-us/nuget/what-is-nuget
3 https://docs.npmjs.com/about-packages-and-modules
executed by itself, without any dependencies on other images. Docker images are layered and might
be dependent on other images as their basis. Such images are referred to as base images.
Package feeds
Packages should be stored in a centralized place for distribution and consumption, so others can take dependencies on the components they contain. The centralized storage for packages is commonly called a
package feed. There are other names in use, such as repository or registry. We will refer to all of these
as package feeds unless it is necessary to use the specific name for clarity.
Each package type has its own type of feed. Put another way, one feed typically contains one type of
packages. There are NuGet feeds, npm feeds, Maven repositories, PyPI feeds, and Docker registries.
Package feeds offer versioned storage of packages. A certain package can exist in multiple versions in the
feed, catering for consumption of a specific version.
Choosing tools
The command-line nature of the tooling offers the ability to include it in scripts to automate the package
management. Ideally, one should be able to use the tooling in build and release pipelines for creating, publishing, and consuming packages from feeds.
Additionally, developer tooling can have integrated support for working with package managers, providing a user interface for the raw tooling. Examples of such tooling are Visual Studio 2017, Visual Studio
Code and Eclipse.
Public
In general, you will find that publicly available package sources are free to use. Sometimes they have a
licensing or payment model for consuming individual packages or the feed itself.
These public sources can also be used to store packages you have created as part of your project. It does
not have to be open source, although it is in most cases. Public and free package sources that offer feeds at no expense will usually require that you make the packages you store publicly available as well.
Private
Private feeds can be used in cases where packages should be available to a select audience.
The main difference between public and private feeds is the need for authentication. Public feeds can be
anonymously accessible and optionally authenticated. Private feeds can be accessed only when authenticated.
There are two options for private feeds:
1. Self-hosting
Some of the package managers are also able to host a feed. Using on-premises or private cloud
resources one can host the required solution to offer a private feed.
2. SaaS services
A variety of third-party vendors and cloud providers offer software-as-a-service feeds that can be kept private. This typically requires a consumption fee or cloud subscription.
The following table contains a non-exhaustive list of self-hosting options and SaaS offerings to privately
host package feeds for each of the types covered.
Consuming packages
Each software project that consumes packages to include the required dependencies will need to use a package manager and one or more package sources. The package manager will take care of downloading the individual packages from the sources and installing them locally on the development machine or build server.
The developer flow will follow this general pattern:
1. Identify a required dependency in your codebase.
2. Find a component that satisfies the requirements for the project.
3. Search the package sources for a package offering a correct version of the component.
4. Install the package into the codebase and development machine.
5. Create the software implementation that uses the new components from the package.
The package manager tooling will facilitate searching and installing the components in the packages.
How this is performed varies for the different package types. Refer to the documentation of the package
manager for instructions on consuming packages from feeds.
To get started you will need to specify the package source to be used. Package managers will have a
default source defined that refers to the standard package feed for its type. Alternative feeds will need to
be configured to allow consuming the packages they offer.
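As a sketch, registering an additional NuGet source from the command line could look like the example below; the organization, feed name, source name, and credentials are placeholders:
dotnet nuget add source "https://pkgs.dev.azure.com/{yourorganization}/_packaging/{feedname}/nuget/v3/index.json" --name MyTeamFeed --username any --password {personal access token} --store-password-in-clear-text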
Upstream sources
Part of the package management involves keeping track of the various sources. It is possible to refer to
multiple sources from a single software solution. However, when combining private and public sources,
the order of resolution of the sources becomes important.
One way to specify multiple package sources is by choosing a primary source and specifying an upstream source. The package manager will evaluate the primary source first and switch to the upstream
source when the package is not found there. The upstream source might be one of the official public
sources or a private source. The upstream source could refer to another upstream source itself, creating a
chain of sources.
A typical scenario is to use a private package source referring to a public upstream source for one of the
official feeds. This effectively augments the packages available from the upstream source with the packages from the private feed, avoiding the need to publish private packages in a public feed.
A source that has an upstream source defined may download and cache packages that it does not contain itself when they are requested. The source will include these downloaded packages and start to act as a
cache for the upstream source. It also offers the ability to keep track of any packages from the external
upstream source.
An upstream source can be a way to avoid giving developer and build machines direct access to external sources. The private feed uses the upstream source as a proxy to the otherwise external source. It will be your feed manager and private source that communicate with the outside. Only privileged roles can add upstream sources to a private feed.
See also Upstream sources4.
Packages graph
A feed can have one or more upstream sources, which might be internal or external. Each of these can have additional upstream sources, creating a package graph of sources. Such a graph can offer many
possibilities for layering and indirection of origins of packages. This might fit well with multiple teams
taking care of packages for frameworks and other base libraries.
The downside is that package graphs can become complex when not properly understood or designed. It
is important to understand how you can create a proper package graph.
See also Constructing a complete package graph5.
Azure Artifacts
Previously you learned about packaging dependencies and the various packaging formats, feeds, sources,
and package managers. Now, you will learn more about package management and how to create a feed
and publish packages to it. During this module NuGet and Azure Artifacts are used as an example of a
package format and a particular type of package feed and source.
Microsoft Azure DevOps provides various features for application lifecycle management, including work
item tracking, source code repositories, build and release pipelines and artifact management.
The artifact management is called Azure Artifacts and was previously known as Package manage-
ment. It offers public and private feeds for software packages of various types.
4 https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/upstream-sources
5 https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/package-graph
Publishing packages
As software is developed and components are written, you will most likely also produce components as
dependencies that can be packaged for reuse. Previously, we discussed guidance for finding components that can be isolated into dependencies. These components need to be managed and packaged. After that, they can be published to a feed, allowing others to consume the packages and use the components they contain.
Creating a feed
The first step is to create a feed where the packages can be stored. In Azure Artifacts you can create
multiple feeds, which are always private. During creation you can specify the name, visibility and whether
to prepopulate the default public upstream sources for NuGet, NPM and Python packages.
Controlling access
The Azure Artifacts feed you created is always private and not available publicly. You gain access to it by authenticating to Azure Artifacts with an account that has access to Azure DevOps and a team project.
By default, a feed will be available to all registered users in Azure DevOps. You can select it to be visible
only to the team project where the feed is created. Whichever option is chosen, you can change the
permissions for a feed from the settings dialog.
Pushing packages is done with the tooling for the package manager. Each package manager and its tooling has a different syntax for pushing.
To manually push a NuGet package you would use the NuGet.exe command-line tool. For a package
called MyDemoPackage the command would resemble this:
nuget.exe push -Source {NuGet package source URL} -ApiKey YourKey MyDemoPackage\MyDemoPackage.nupkg
Updating packages
Packages might need to be updated during their lifetime. Technically, updating a package is performed
by pushing a new version of the package to the feed. The package feed manager takes care of properly
storing the updated package amongst the existing packages in the feed.
Please note that updating packages requires a versioning strategy. This will be covered later.
6 http://microsoft.github.io/PartsUnlimited/
We have published the package to the feed, and it has been pushed successfully.
Walkthroughs
For details on how to integrate NuGet, npm, Maven, Python, and Universal Feeds, see the following
walkthroughs:
Get started with NuGet packages in Azure DevOps Services and TFS7
Use npm to store JavaScript packages in Azure DevOps Services or TFS8
Get started with Maven packages in Azure DevOps Services and TFS9
Get started with Python packages in Azure Artifacts10
7 https://docs.microsoft.com/en-us/azure/devops/artifacts/get-started-nuget?view=vsts&tabs=new-nav
8 https://docs.microsoft.com/en-us/azure/devops/artifacts/get-started-npm?view=vsts&tabs=new-nav%2Cwindows
9 https://docs.microsoft.com/en-us/azure/devops/artifacts/get-started-maven?view=vsts&tabs=new-nav
10 https://docs.microsoft.com/en-us/azure/devops/artifacts/quickstarts/python-packages?view=vsts&tabs=new-nav
11 https://docs.microsoft.com/en-us/azure/devops/artifacts/quickstarts/universal-packages?view=vsts&tabs=azuredevops
Package security
Securing access to package feeds
Trusted sources
Package feeds are a trusted source of packages. The packages that are offered will be consumed by other
code bases and used to build software that needs to be secure. Imagine what would happen if a package feed offered malicious components in its packages. Each consumer would be affected when installing the packages onto its development machine or build server. The same applies to any other device that runs the end product, as the malicious components will be executed as part of the code. Usually, the code runs with high privileges, posing a substantial security risk if any of the packages cannot be trusted and might contain unsafe code.
Securing access
Therefore, it is essential that package feeds are secured for access by authorized accounts, so only
verified and trusted packages are stored there. No one should be able to push packages to a feed without
the proper role and permissions. This prevents others from pushing malicious packages. It still assumes
that the persons who can push packages will only add safe and secure packages. Especially in the
open-source world this is performed by the community. A package source can further guard its feed with
the use of security and vulnerability scan tooling. Additionally, consumers of packages can use similar
tooling to perform the scans themselves.
Securing availability
Another aspect of security for package feeds is about public or private availability of the packages. The
feeds of public sources are usually available for anonymous consumption. Private feeds on the other
hand have restricted access most of the time. This applies to consumption and publishing of packages.
Private feeds will allow only users in specific roles or teams access to its packages.
Package feeds need to have secure access for a variety of reasons. The access should involve allowing:
●● Restricted access for consumption
Whenever a package feed and its packages should only be consumed by a certain audience, it is
required to restrict access to it. Only those allowed access will be able to consume the packages from
the feed.
●● Restricted access for publishing
Secure access is required to restrict who can publish so feeds and their packages cannot be modified
by unauthorized or untrusted persons and accounts.
Roles
Azure Artifacts has four different roles for package feeds. These are incremental in the permissions they
give.
The roles are in incremental order:
●● Reader: Can list and restore (or install) packages from the feed
●● Collaborator: Can save packages from upstream sources
●● Contributor: Can push and unlist packages in the feed
●● Owner: Has all available permissions for the feed
Permissions
The feeds in Azure Artifacts require permissions for the various features they offer. The list of permissions consists of increasingly privileged operations.
For each permission you can assign users, teams, and groups to a specific role, giving the permissions
corresponding to that role. You need to have the Owner role to be able to do so. Once an account has
access to the feed through the permission to list and restore packages, it is considered a feed user.
Just like permissions and roles for the feed itself, there are additional permissions for access to the
individual views. Any feed user has access to all the views, whether the default views of @Local, @Release
or @Prerelease, or newly created ones. During creation of a feed, you can choose whether the feed is visible to people in your Azure DevOps organization or only to specific people.
See also:
Secure and share packages using feed permissions12
Authentication
Azure DevOps users will authenticate against Azure Active Directory when accessing the Azure DevOps
portal. After being successfully authenticated, they will not have to provide any credentials to Azure
Artifacts itself. The roles for the user, based on their identity or their team and group membership, determine authorization. When access is allowed, the user can simply navigate to the Azure Artifacts section of the
team project.
The authentication from Azure Pipelines to Azure Artifacts feeds is taken care of transparently. It will be based upon the roles and permissions of the build identity. The previous section on Roles covered
some details on the required roles for the build identity.
The authentication from inside Azure DevOps does not need any credentials for accessing feeds by itself.
However, when accessing secured feeds outside Azure Artifacts, such as other package sources, you most
12 https://docs.microsoft.com/en-us/azure/devops/artifacts/feeds/feed-permissions
likely must provide credentials to authenticate to the feed manager. Each package type has its own way
of handling the credentials and providing access upon authentication. The command-line tooling will
provide support in the authentication process.
For the build tasks in Azure Pipelines, you will provide the credentials via a Service connection.
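For example, a minimal Azure Pipelines YAML sketch that restores NuGet packages from a project-scoped feed might look like this; the project and feed names are placeholders, and the build identity still needs at least the Reader role on the feed:
steps:
  - task: NuGetCommand@2
    inputs:
      command: 'restore'
      restoreSolution: '**/*.sln'
      feedsToUse: 'select'
      vstsFeed: 'MyTeamProject/MyTeamFeed'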
Immutable packages
As packages get new versions, your codebase can choose when to use a new version of the packages it
consumes. It does so by specifying the specific version of the package it requires. This implies that
packages themselves should always have a new version when they change. Whenever a package is
published to a feed it should not be allowed to change any more. If it were, it would be at the risk of
introducing potential breaking changes to the code. In essence, a published package is immutable.
Replacing or updating an existing version of a package is not allowed. Most of the package feeds do not
allow operations that would change an existing version. Regardless of the size of the change a package
can only be updated by the introduction of a new version. The new version should indicate the type of
change and impact it might have.
See also Key concepts for Azure Artifacts13.
Versioning of artifacts
It is proper software development practice to indicate changes to code with the introduction of an
increased version number. However small or large a change, it requires a new version. A component and
its package can have independent versions and versioning schemes.
The versioning scheme can differ per package type. Typically, it uses a scheme that can indicate the type
of change that is made. Most commonly this involves three types of changes:
●● Major change
Major indicates that the package and its contents have changed significantly. It often occurs at the
introduction of a completely new version of the package. This can be at a redesign of the component.
Major changes are not guaranteed to be compatible and usually have breaking changes from older
versions. Major changes might require a substantial amount of work to adapt the consuming codebase to the new version.
●● Minor change
Minor indicates that the package and its contents have substantial changes made but are a smaller
13 https://docs.microsoft.com/en-us/azure/devops/artifacts/artifacts-key-concepts#immutability
increment than a major change. These changes can be backward compatible from the previous
version, although they are not guaranteed to be.
●● Patch
A patch or revision is used to indicate that a flaw, bug, or malfunctioning part of the component has
been fixed. Normally, this is a backward compatible version compared to the previous version.
How artifacts are versioned technically varies per package type. Each type has its own way of indicating
the version in metadata. The corresponding package manager can inspect the version information. The
tooling can query the package feed for packages and the available versions.
Additionally, a package type might have its own conventions for versioning, as well as a particular versioning scheme.
See also Publish to NuGet feeds14
Semantic versioning
One of the predominant ways of versioning is the use of semantic versioning. It is not a standard per se
but does offer a consistent way of expressing intent and semantics of a certain version. It describes a
version in terms of its backward compatibility to previous versions.
Semantic versioning uses a three-part version number and an additional label. The version has the form
of Major.Minor.Patch, corresponding to the three types of changes covered in the previous section.
Examples of versions using the semantic versioning scheme are 1.0.0 and 3.7.129. These versions do
not have any labels.
For prerelease versions it is customary to use a label after the regular version number. A label is a textual
suffix separated by a hyphen from the rest of the version number. The label itself can be any text describing the nature of the prerelease. Examples of these are rc1, beta27 and alpha, forming version
numbers like 1.0.0-rc1 as a prerelease for the upcoming 1.0.0 version.
Prereleases are a common way to prepare for the release of the label-less version of the package. Early
adopters can take a dependency on a prerelease version to build using the new package. In general, it is
not a good idea to use prerelease versions of packages and their components for released software.
It is good to anticipate the impact of the new components by creating a separate branch in the codebase and using the prerelease version of the package. Chances are that there will be incompatible changes from a prerelease to the final version.
See also Semantic Versioning 2.0.015.
Release views
When building packages from a pipeline, the package needs to have a version before the package can be
consumed and tested. Only after testing is the quality of the package known. Since package versions
cannot and should not be changed, it becomes challenging to choose a certain version beforehand.
Azure Artifacts recognizes a quality level of packages in its feeds and the difference between prerelease
and release versions. It offers different views on the list of packages and their versions, separating these
based on their quality level. This fits well with the use of semantic versioning of the packages for predictability of the intent of a particular version, but it is additional metadata from the Azure Artifacts feed, called a descriptor.
14 https://docs.microsoft.com/en-us/azure/devops/pipelines/artifacts/nuget#package-versioning
15 https://semver.org/
Feeds in Azure Artifacts have three different views by default. These views are added when a new feed is
created. The three views are:
●● Release
The @Release view contains all packages that are considered official releases.
●● Prerelease
The @Prerelease view contains all packages that have a label in their version number.
●● Local
The @Local view contains all release and prerelease packages as well as the packages downloaded
from upstream sources.
Using views
You can use views to help consumers of a package feed filter between released and unreleased versions of packages. Essentially, views allow a consumer to make a conscious decision to choose from released packages, or to opt in to prereleases of a certain quality level.
By default, the @Local view is used to offer the list of available packages. The format for this URI is:
https://pkgs.dev.azure.com/{yourteamproject}/_packaging/{feedname}/nuget/
v3/index.json
When consuming a package feed by its URI endpoint, the address can have the requested view included.
For a specific view, the URI includes the name of the view, which changes to be:
https://pkgs.dev.azure.com/{yourteamproject}/_packaging/{feedname}@
{Viewname}/nuget/v3/index.json
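For example, assuming a hypothetical organization fabrikam and a feed called FabrikamFeed, the @Release view of the feed would be addressed as:
https://pkgs.dev.azure.com/fabrikam/_packaging/FabrikamFeed@Release/nuget/v3/index.json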
The tooling will show and use the packages from the specified view automatically.
Tooling may offer an option to select prerelease versions, such as shown in this Visual Studio 2017 NuGet
dialog. This does not relate or refer to the @Prerelease view of a feed. Instead, it relies on the presence
of prerelease labels of semantic versioning to include or exclude packages in the search results.
See also:
●● Views on Azure DevOps Services feeds16
●● Communicate package quality with prerelease and release views17
Promoting packages
Azure Artifacts has the notion of promoting packages to views to indicate that a version is of a certain
quality level. By selectively promoting packages you can plan when packages have a certain quality and
are ready to be released and supported by the consumers.
You can promote packages to one of the available views as the quality indicator. The two views Release
and Prerelease might be sufficient, but you can create more views when you want finer grained quality
levels if necessary, such as alpha and beta.
Packages will always show in the Local view, but only in a particular view after being promoted to it.
Depending on the URL used to connect to the feed, the available packages will be listed.
16 https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/views
17 https://docs.microsoft.com/en-us/azure/devops/artifacts/feeds/views
Upstream sources will only be evaluated when using the @Local view of the feed. After packages have been downloaded and cached in the @Local view, you can see and resolve them in other views once they have been promoted to those views.
It is up to you to decide how and when to promote packages to a specific view. This process can be
automated by using an Azure Pipelines task as part of the build pipeline.
Packages that have been promoted to a view will not be deleted based on the retention policies.
●● dotnet restore
●● dotnet build
●● dotnet push
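As a sketch, these commands could be run from an Azure Pipelines YAML definition using the .NET Core CLI task; the feed name MyTeamFeed is a placeholder, and a pack step (not shown) is assumed to have produced the .nupkg file:
steps:
  - task: DotNetCoreCLI@2
    inputs:
      command: 'restore'
      projects: '**/*.csproj'
      feedsToUse: 'select'
      vstsFeed: 'MyTeamFeed'
  - task: DotNetCoreCLI@2
    inputs:
      command: 'build'
      projects: '**/*.csproj'
  - task: DotNetCoreCLI@2
    inputs:
      command: 'push'
      packagesToPush: '$(Build.ArtifactStagingDirectory)/*.nupkg'
      nuGetFeedType: 'internal'
      publishVstsFeed: 'MyTeamFeed'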
18 https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/best-practices
It shows that the feed already contains PartsUnlimited.Security 1.0.0. We go back to the Visual Studio project to see what is happening.
4. Open the source code for the PartsUnlimited package in Visual Studio in a separate solution.
Lab
Lab 09: Package management with Azure Arti-
facts
Lab overview
Azure Artifacts facilitates discovery, installation, and publishing of NuGet, npm, and Maven packages in Azure DevOps. It's deeply integrated with other Azure DevOps features such as Build, making package management a seamless part of your existing workflows.
In this lab, you will learn how to work with Azure Artifacts by using the following steps:
●● create and connect to a feed.
●● create and publish a NuGet package.
●● import a NuGet package.
●● update a NuGet package.
Objectives
After you complete this lab, you will be able to:
●● Create and connect to an Azure Artifacts feed.
●● Create and publish a NuGet package.
●● Import and update a NuGet package.
Lab duration
●● Estimated time: 40 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions19
19 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
Can you create a package feed for Maven in Azure Artifacts?
Yes
No
Review Question 3
What type of package should you use for Machine learning training data and models?
NuGet
NPM
Maven
Universal
Python
Review Question 4
If an existing package is found to be broken or buggy, how should it be fixed?
Review Question 5
What is meant by saying that a package should be immutable?
Answers
Review Question 1
If you are creating a feed that will allow yourself and those that you invite to publish, what visibility
should you choose?
public
■■ private
Review Question 2
Can you create a package feed for Maven in Azure Artifacts?
■■ Yes
No
Review Question 3
What type of package should you use for Machine learning training data and models?
NuGet
NPM
Maven
■■ Universal
Python
If an existing package is found to be broken or buggy, how should it be fixed?
Published packages are immutable, so the fix must be published as a new version of the package; the existing version cannot be changed.
Module 10 Designing a Release Strategy
Module overview
Module overview
Welcome to this module about designing a release strategy. In this module, we will talk about Continuous Delivery in general. In this introduction, we will cover the basics. I'll explain the concepts of Continuous Delivery, Continuous Integration, and Continuous Deployment and their relation to DevOps, and we will discuss why you would need Continuous Delivery and Continuous Deployment. After that, we will
talk about releases and deployments and the differences between those two.
Once we have covered these general topics, we will talk about release strategies and artifact sources, and
walk through some considerations when choosing and defining those. We will also discuss the considerations for setting up deployment stages and your delivery and deployment cadence, and lastly about
setting up your release approvals.
After that, we will cover some ground to create a high-quality release pipeline and talk about the quality
of your release process and the quality of a release and difference between those two. We will look at
how to visualize your release process quality and how to control your release using release gates as a
mechanism. Finally, we will look at how to deal with release notes and documentation.
Finally, we look at choosing the right release management tool. There are a lot of tools out there. We will
cover the components that you need to look at if you are going to choose the right release management
tool product or company.
Learning objectives
At the end of this module, students will be able to:
●● Differentiate between a release and a deployment
●● Define the components of a release pipeline
●● Explain things to consider when designing your release strategy
●● Classify a release versus a release process, and outline how to control the quality of both
●● Describe the principle of release gates and how to deal with release notes and documentation
Silo-based development
Long release cycles, a lot of testing, code freezes, night and weekend work, and a lot of people involved are all meant to ensure that everything works. But the more we change, the more risk it entails, and we are back at the beginning, often resulting in yet another document or process that should be followed. This is what I call silo-based development.
If we look at this picture of a traditional, silo-based value stream, we see Bugs and Unplanned work,
necessary updates or support work and planned (value adding) work, all added to the backlog of the
teams. When everything is planned and the first “gate” can be opened, everything drops to the next
phase. All the work, and thus all the value moves in piles to the next phase. It moves from Plan phase to a
Realize phase where all the work is developed, tested, and documented, and from here, it moves to the
release phase. All the value is released at the same time. As a result, the release takes a long time.
We need to move towards a situation where the value is not piled up and released all at once, but where
value flows through a pipeline. Just like in the picture, a piece of work is a marble. And only one piece of
work can flow through the pipeline at once. So, work must be prioritized in the right way. As you can see
the pipeline has green and red outlets. These are the feedback loops or quality gates that we want to
have in place.
A feedback loop can be different things:
●● A unit test to validate the code
●● An automated build to validate the sources
●● An automated test on a Test environment
●● Some monitor on a server
●● Usage instrumentation in the code
If one of the feedback loops is red, the marble cannot pass the outlet and it will end up in the Monitor
and Learn tray. This is where the learning happens. The problem is analyzed and solved so that the next
time a marble passes the outlet, it is green.
Every single piece of work flows through the pipeline until it ends up in the tray of value. The more that is automated, the faster value flows through the pipeline.
Companies want to move toward Continuous Delivery. They see the value. They hear their customers.
Companies want to deliver their products as fast as possible. Quality should be higher. The move to
production should be faster. Technical Debt should be lower.
A great way to improve your software development practices was the introduction of Agile and Scrum.
Last year around 80% of all companies claimed that they adopted Scrum as a software development
practice. By using Scrum, many teams can produce a working piece of software after a sprint of maybe 2
or 3 weeks. But producing working software is not the same as delivering working software. The result is
that all “done” increments are waiting to be delivered in the next release, which is coming in a few
months.
What we see now, is that Agile teams within a non-agile company are stuck in a delivery funnel. The
bottleneck is no longer the production of working software, but the problem has become the delivery of
working software. The finished product is waiting to be delivered to the customers to get business value,
but this does not happen. Continuous Delivery needs to solve this problem.
1 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/releases?view=vsts
2 https://docs.microsoft.com/en-us/azure/devops/articles/phase-features-with-feature-flags?view=vsts
If the system is stable and operates the same as it did before, we can decide to flip a switch. This might
reveal one or more features to the end user or change a set of routines that are part of the system.
The whole idea of separating deployment from release (exposing features with a switch) is compelling
and something we want to incorporate in our Continuous Delivery practice. It helps us with more stable
releases and better ways to roll back when we run into issues when we have a new feature that produces
problems.
We switch it off again and then create a hotfix. By separating deployment from the release of a feature,
you create the opportunity to deploy any time of the day, since the new software will not affect the
system that already works.
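There is no single prescribed way to implement such a switch. As a minimal sketch, a deployment could ship with a configuration file like the one below, which the application reads at runtime to decide which features to expose; the file and flag names are hypothetical:
# featureflags.yml - hypothetical runtime configuration
featureFlags:
  newCheckoutFlow: false   # deployed with this release, but not yet exposed to users
  improvedSearch: true     # already switched on for all users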
●● The Organization
●● Application Architecture
●● Skills
●● Tooling
●● Tests
●● other things?
The components that make up the release pipeline or process are used to create a release. There is a
difference between a release and the release pipeline or process.
The release pipeline is the blueprint through which releases are done. We will cover more of this when discussing the quality of releases and release processes.
See also Release pipelines3.
Artifact sources
What is an artifact? An artifact is a deployable component of your application. These components can
then be deployed to one or more environments. In general, the idea about build and release pipelines
and Continuous Delivery is to build once and deploy many times. This means that an artifact will be
deployed to multiple environments. Achieving this implies that the artifact is a stable package. The only thing that you want to change when you deploy an artifact to a new environment is the configuration. The contents of the package should never change. This is what we call immutability4. We should be 100% sure that the package that we build, the artifact, remains unchanged.
How do we get an artifact? There are different ways to create and retrieve artifacts, and not every method
is appropriate for every situation.
The most common and most used way to get an artifact within the release pipeline is to use a build
artifact. The build pipeline compiles, tests, and eventually produces an immutable package, which is
stored in a secure place (storage account, database etc.).
The release pipeline then uses a secure connection to this secured place to get the build artifact and
perform additional actions to deploy this to an environment. The big advantage of using a build artifact is
that the build produces a versioned artifact. The artifact is linked to the build and gives us automatic
traceability. We can always find the sources that produced this artifact.
3 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/?view=vsts
4 https://docs.microsoft.com/en-us/azure/devops/artifacts/artifacts-key-concepts?view=vsts
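For example, a build pipeline can publish its output as a versioned build artifact with a step like the following minimal YAML sketch; the path and artifact name are placeholders:
steps:
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'
      publishLocation: 'Container'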
Another possible artifact source is version control. We can directly link our version control to our release
pipeline. The release is then related to a specific commit in our version control system. With that, we can
also see which version of a file or script is eventually installed. In this case, the version does not come
from the build, but from version control. A consideration for choosing a version control artifact instead of
a build artifact can be that you only want to deploy one specific file. If no additional actions are required
before this file is used in the release pipeline, it does not make sense to create a versioned package containing just that one file. Helper scripts that perform actions to support the release process (clean up,
rename, string actions) are typically good candidates to get from version control.
Another possibility of an artifact source can be a network share containing a set of files. However, you
should be aware of the possible risk. The risk is that you are not 100% sure that the package that you are
going to deploy is the same package that was put on the network share. If other people can access the
network share as well, the package might be compromised. For that reason, this option will not be
sufficient to prove integrity in a regulated environment (banks, insurance companies).
Finally, container registries are a rising star when it comes to artifact sources. Container registries are
versioned repositories where container artifacts are stored. By pushing a versioned container to the container registry, and consuming that same version within the release pipeline, you get more or less the same advantages as using a build artifact stored in a safe location.
5 https://semver.org/
You can also point at a disk or network share, but this implies some risk concerning auditability and
immutability. Can you ensure the package never changed?
See also Release artifacts and artifact sources6.
Steps
Let's look at how to work with one or more artifact sources in the release pipeline.
1. In the Azure DevOps environment, open the Parts Unlimited project, then from the main menu, click
Pipelines, then click Releases.
6 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/artifacts?view=vsts
3. In the Select a template pane, note the available templates, but then click the Empty job option at
the top. This is because we are going to focus on selecting an artifact source.
4. In the Artifacts section, click +Add an artifact.
5. Note the available options in the Add an artifact pane, and click the option to see more artifact
types, so that you can see all the available artifact types:
While we're in this section, let's briefly look at the available options.
6. Click Build and note the parameters required. This option is used to retrieve artifacts from an Azure
DevOps Build pipeline. Using it requires a project name, and a build pipeline name. (Note that
projects can have multiple build pipelines). This is the option that we will use shortly.
7. Click Azure Repository and note the parameters required. It requires a project name and asks you to
select the source repository.
8. Click GitHub and note the parameters required. The Service is a connection to the GitHub repository.
It can be authorized by either OAuth or by using a GitHub personal access token. You also need to
select the source repository.
9. Click TFVC and note the parameters required. It also requires a project name and asks you to select
the source repository.
Note: A release pipeline can have more than one set of artifacts as input. A common example is a situation
where as well as your project source, you also need to consume a package from a feed.
10. Click Azure Artifacts and note the parameters required. It requires you to identify the feed, package
type, and package.
11. Click GitHub Release and note the parameters required. It requires a service connection and the
source repository.
13. Click Docker Hub and note the parameters required. This option would be useful if your containers
are stored in Docker Hub rather than in an Azure Container Registry. After choosing a secure service
connection, you need to select the namespace and the repository.
14. Finally, click Jenkins and note the parameters required. You do not need to get all your artifacts from
Azure. You can retrieve them from a Jenkins build. So, if you have a Jenkins server in your infrastructure, you can use the build artifacts from there, directly in your Azure DevOps pipelines.
We have now added the artifacts that we will need for later walkthroughs.
16. To save the work, click Save, then in the Save dialog box, click OK.
As we mentioned in the introduction, Continuous Delivery is not only about deploying multiple times a
day, but also about being able to deploy on demand. When we define our cadence, questions that we
should ask ourselves are:
●● Do we want to deploy our application?
●● Do we want to deploy multiple times a day?
●● Can we deploy to a stage? Is it used?
For example, a tester who is testing an application during the day might not want a new version of the app deployed during the test phase.
As another example, if deploying your application incurs downtime, you do not want to deploy while users are using the application.
The frequency of deployment, or cadence, differs from stage to stage. A typical scenario that we often
see is that continuous deployment happens to the development stage. Every new change ends up there once it is completed and built. Deploying to the next stage does not always occur multiple times a day but might happen only during the night.
When you are designing your release strategy, choose your triggers carefully and think about the required release cadence.
Some things we need to take into consideration are:
●● What is your target environment?
●● Is it used by one team or is it used by multiple teams?
●● If a single team uses it, you can deploy frequently. Otherwise, you need to be a bit more careful.
●● Who are the users? Do they want a new version multiple times a day?
●● How long does it take to deploy?
●● Is there downtime? What happens to performance? Are users impacted?
Steps
Let's now look at the other section in the release pipeline that we have created: Stages.
1. Click on Stage 1 and in the Stage properties pane, set Stage name to Development and close the
pane.
Note: stages can be based on templates. For example, you might be deploying a web application using
node.js or Python. For this walkthrough, that won't matter because we are just focusing on defining a strategy.
2. To add a second stage, click +Add in the Stages section and note the available options. You have a
choice to create a new stage, or to clone an existing stage. Cloning a stage can be very helpful in
minimizing the number of parameters that need to be configured. But for now, just click New stage.
3. When the Select a template pane appears, scroll down to see the available templates. For now, we
don't need any of these, so just click Empty job at the top, then in the Stage properties pane, set
Stage name to Test, then close the pane.
4. Hover over the Test stage and notice that two icons appear below. These are the same options that
were available in the menu drop down that we used before. Click the Clone icon to clone the stage to
a new stage.
5. Click on the Copy of Test stage and in the stage properties pane, set Stage name to Production and
close the pane.
We have now defined a very traditional deployment strategy. Each of the stages contains a set of tasks,
and we will look at those tasks later in the course.
Note: The same artifact sources move through each of the stages.
The lightning bolt icon on each stage shows that we can set a trigger as a pre-deployment condition. The person icons at each end of a stage show that we can have pre-deployment and post-deployment approvers.
Concurrent stages
You'll notice that at present we have all the stages one after the other in a sequence. It is also possible to have concurrent stages. Let's see an example.
6. Click the Test stage, and on the stage properties pane, set Stage name to Test Team A and close the
pane.
7. Hover over the Test Team A stage and click the Clone icon that appears, to create a new cloned
stage.
8. Click the Copy of Test Team A stage, and on the stage properties pane, set Stage name to Test
Team B and close the pane.
9. Click the Pre-deployment conditions icon (i.e., the lightning bolt) on Test Team B to open the
pre-deployment settings.
10. In the Pre-deployment conditions pane, note that the stage can be triggered in three different ways:
The stage can immediately follow Release. (That is how the Development stage is currently configured). It
can require manual triggering. Or, more commonly, it can follow another stage. At present, it is following
Test Team A but that's not what we want.
11. From the Stages drop down list, choose Development and uncheck Test Team A, then close the pane.
We now have two concurrent Test stages.
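The walkthrough above uses the classic release editor, but the same fan-out can be sketched in a YAML pipeline. The following is a minimal sketch only; the stage and job names are hypothetical and the script steps stand in for real deployment tasks.

stages:
- stage: Development
  jobs:
  - job: Deploy
    steps:
    - script: echo "Deploying to Development"

# Both test stages depend only on Development, so they run concurrently.
- stage: TestTeamA
  dependsOn: Development
  jobs:
  - job: Deploy
    steps:
    - script: echo "Deploying to Test Team A"

- stage: TestTeamB
  dependsOn: Development
  jobs:
  - job: Deploy
    steps:
    - script: echo "Deploying to Test Team B"

# Production waits for both test stages to succeed.
- stage: Production
  dependsOn:
  - TestTeamA
  - TestTeamB
  jobs:
  - job: Deploy
    steps:
    - script: echo "Deploying to Production"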
Azure DevOps pipelines are very configurable and support a wide variety of deployment strategies. The name Stages is a better fit than Environments, even though stages are often used to represent environments.
For now, let's give the pipeline a better name and save the work.
12. At the top of the screen, hover over the New release pipeline name and when a pencil appears, click
it to edit the name. Type Release to all environments as the name and hit enter or click elsewhere on
the screen.
13. For now, save the environment-based release pipeline that you have created by clicking Save, then in
the Save dialog box, click OK.
Scheduled triggers
A scheduled trigger speaks for itself: it allows you to set up a time-based way to start a new release, for example every night at 3:00 AM or every day at 12:00 PM. You can have one or more schedules per day, but the release will always start at those specific times.
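In a classic release pipeline this is configured in the Scheduled release trigger pane used later in this walkthrough. As a rough YAML-pipeline equivalent (a sketch only, with an assumed main branch), a cron schedule looks like this:

schedules:
- cron: "0 3 * * *"        # every day at 3:00 AM (UTC by default)
  displayName: Nightly release
  branches:
    include:
    - main
  always: true             # run even if nothing has changed since the last run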
Manual trigger
With a manual trigger, a person or a system triggers the release based on a specific event. When it is a person, they typically use a UI to start a new release. When it is an automated process, some event usually occurs and, by using the automation engine that is usually part of the release management tool, the release can be triggered from another system.
For more information, see also:
●● Release triggers7
●● Stage Triggers8
Steps
Let's now look at when our release pipeline is used to create deployments. Mostly, this will involve the
use of triggers.
7 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/triggers?view=vsts
8 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/triggers?view=vsts#env-triggers
When we refer to a deployment, we are referring to each individual stage, and each stage can have its
own set of triggers that determine when the deployment occurs.
1. Click the lightning bolt on the _Parts Unlimited-ASP.NET-CI artifact.
2. In the Continuous deployment trigger pane, click the Disabled option to enable continuous deploy-
ment. It will then say Enabled.
Once this is selected, every time that a build completes, a deployment of the release pipeline will start.
✔️ Note: You can filter which branches affect this, so for example you could choose the master branch or
a particular feature branch.
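In the classic editor this filtering is done in the Continuous deployment trigger pane. For comparison, a YAML pipeline expresses the same idea with a branch filter on its CI trigger; this is a sketch only, and the branch names are just examples:

trigger:
  branches:
    include:
    - master
    - feature/*
    exclude:
    - experimental/*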
Scheduled deployments
You might not want to have a deployment commence every time a build completes. That might be very
disruptive to testers downstream if it was happening too often. Instead, it might make sense to set up a
deployment schedule.
3. Click on the Scheduled release trigger icon to open its settings.
4. In the Scheduled release trigger pane, click the Disabled option to enable scheduled release. It will
then say Enabled and additional options will appear.
You can see in the screenshot above that a deployment using the release pipeline would now occur each weekday at 3 AM. This might be convenient when, for example, you share a stage with testers who work during the day. You don't want to constantly deploy new versions to that stage while they're working. This setting would create a clean, fresh environment for them at 3 AM each weekday.
✔️ Note: The default timezone is UTC. You can change this to suit your local timezone as this might be
easier to work with when creating schedules.
5. For now, we don't need a scheduled deployment, so click the Enabled button again to disable the
scheduled release trigger and close the pane.
Pre-deployment triggers
6. Click the lightning bolt on the Development stage to open the pre-deployment conditions.
✔️ Note: Both artifact filters and a schedule can be set at the pre-deployment for each stage rather than
just at the artifact configuration level.
Deployment to any stage doesn't happen automatically unless you have chosen to allow that.
Release approvals
As we described in the introduction, Continuous Delivery is all about delivering on demand. But, as we discussed when comparing release and deployment, delivery (or deployment) is only the technical part of the Continuous Delivery process. It is all about how you are technically able to install the software on an environment, but it says nothing about the process that needs to be in place for a release.
Release approvals do not control how you deliver, but whether you want to deliver multiple times a day.
Manual approvals also serve a significant need. Organizations that start with Continuous Delivery often lack a certain amount of trust: they do not dare to release without a manual approval. After a while, when they find that the approval does not add any value and the release always succeeds, the manual approval is often replaced by an automatic check.
Things to consider when you are setting up a release approval are:
●● What do we want to achieve with the approval?
Is it an approval that we need for compliance reasons, for example to adhere to the four-eyes principle required for SOX compliance? Is it an approval that we need to manage our dependencies? Or is it an approval that needs to be in place purely because we need a sign-off from an authority such as a Security Officer or Product Owner?
●● Who needs to approve?
We need to know who needs to approve the release. Is it a product owner, a security officer, or simply someone other than the person who wrote the code? This is important because the approver is part of the process: they can delay the process if they are not available, so be aware of that.
●● When do you want to approve?
Another essential thing to consider is when to approve. This is directly related to what happens after approval. Can you continue without approval, or is everything on hold until approval is given? By using scheduled deployments, you can separate approval from deployment.
Although manual approval is a great mechanism to control the release, it is not always useful. On many occasions, the check can be done at an earlier stage, for example by approving a change when it is made in source control.
Scheduled deployments already solve the dependency issue: you do not have to wait for a person in the middle of the night. But there is still a manual action involved. If you want to eliminate manual activities altogether, but still want to have control, you start talking about automatic approvals, or release gates.
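For YAML pipelines, approvals and gates are configured as checks on an environment in the Azure DevOps portal; a deployment job that targets that environment then waits until the checks pass. The following is a minimal sketch only, and the environment name is an assumption:

stages:
- stage: Production
  jobs:
  - deployment: DeployWebsite
    environment: production      # approvals and gates (checks) are attached to this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Runs only after all approvals and checks have passed"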
●● Release approvals and gates overview9
●● Release Approvals10
Steps
Let's now look at when our release pipeline needs manual approval before deployment of a stage starts,
or manual approval that the deployment of a stage completed as expected.
While DevOps is all about automation, manual approvals are still very useful. There are many scenarios
where they are needed. For example, a product owner might want to sign off a release before it moves to
production. Or the scrum team wants to make sure that no new software is deployed to the test environment before someone signs off on it, because the environment might be constantly in use and an appropriate time needs to be found.
This can help to gain trust in the DevOps processes within the business.
Even if the process will later be automated, people might still want to have a level of manual control until
they become comfortable with the processes. Explicit manual approvals can be a great way to achieve
that.
Let's try one.
1. Click the pre-deployment conditions icon for the Development stage to open the settings.
9 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/approvals?view=vsts
10 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/approvals?view=vsts
2. Click the Disabled button in the Pre-deployment approvals section to enable it.
3. In the Approvers list, find your own name and select it. Then set the Timeout to 1 Days.
Note: Approvers is a list, not just a single value. If you add more than one person in the list, you can also
choose if they need to approve in sequence, or if either or both approvals are needed.
4. Take note of the approver policy options that are available:
It is very common to not allow a user who requests a release or deployment to also approve it. In this
case, we are the only approver so we will leave that unchecked.
5. Close the Pre-deployment conditions pane and notice that a checkmark has appeared beside the
person in the icon.
8. In the Create a new release pane, note the available options, then click Create.
9. In the upper left of the screen, you can see that a release has been created.
10. At this point, an email should have been received, indicating that an approval is required.
At this point, you could just click the link in the email, but instead, we'll navigate within Azure DevOps to
see what's needed.
11. Click on the Release 1 Created link (or whatever number it is for you) in the area we looked at in Step
9 above. We are then taken to a screen that shows the status of the release.
You can see that a release has been manually triggered and that the Development stage is waiting for an
approval. As an approver, you can now perform that approval.
12. Hover over the Development stage and click the Approve icon that appears.
Note: Options to cancel the deployment or to view the logs are also provided at this point.
13. In the Development approvals window, add a comment and click Approve.
The deployment stage will then continue. Watch as each stage proceeds and succeeds.
Release gates
Release gates give you additional control over the start and completion of the deployment pipeline. They are often set up as pre-deployment and post-deployment conditions.
In many organizations, there are so-called dependency meetings: planning sessions where the release schedule of dependent components is discussed. Think of downtime of a database server or an update of an API. This takes a lot of time and effort, and the only thing that is needed is a signal of whether the release can proceed. Instead of holding this meeting, you can create a mechanism where people press a button on a form when the release cannot advance. When the release starts, it checks the state of the gate by calling an API. If the “gate” is open, we can continue; otherwise, we stop the release.
By using scripts and APIs, you can create your own release gates instead of, or in addition to, a manual approval. Other scenarios for automatic approvals include:
●● Incident and issues management. Ensure the required status for work items, incidents, and issues. For
example, ensure that deployment only occurs if no bugs exist.
●● Notify users such as legal approval departments, auditors, or IT managers about a deployment by
integrating with approval collaboration systems such as Microsoft Teams or Slack and waiting for the
approval to complete.
●● Quality validation. Query metrics from tests on the build artifacts such as pass rate or code coverage
and only deploy if they are within required thresholds.
●● Security scan on artifacts. Ensure security scans such as anti-virus checking, code signing, and policy
checking for build artifacts have completed. A gate might initiate the scan and wait for it to complete
or check for completion.
●● User experience relative to baseline. Using product telemetry, ensure the user experience hasn't
regressed from the baseline state. The experience level before the deployment could be considered a
baseline.
●● Change management. Wait for change management procedures in a system such as ServiceNow to complete before the deployment occurs.
●● Infrastructure health. Execute monitoring and validate the infrastructure against compliance rules after
deployment or wait for proper resource utilisation and a positive security report.
In short, approvals and gates give you additional control over the start and completion of the deployment pipeline. They can be set up as pre-deployment and post-deployment conditions, which can include waiting for users to manually approve or reject deployments, and checking with other automated systems until specific requirements are verified. In addition, you can configure a manual intervention to pause the deployment pipeline and prompt users to carry out manual tasks, then resume or reject the deployment.
To find out more about Release Approvals and Gates, check these documents.
●● Release approvals and gates overview11
●● Release Approvals12
●● Release Gates13
Steps
Let's now look at when our release pipeline needs to perform automated checks for issues like code
quality, before continuing with the deployments. That automated approval phase is achieved by using
Release Gates.
First we need to make sure that the Release Gates can execute work item queries.
1. On the Boards > Queries page, click All to see all the queries (not just favorites).
2. Click the ellipsis for Shared Queries and choose Security.
3. Add a user ProjectName Build Service (CompanyName) if they are not already present, and choose
Allow for Read permissions.
Now let's look at configuring a release gate.
1. Click the lightning icon on the Development stage to open the pre-deployment conditions settings.
2. In the Pre-deployment conditions pane, click the Disabled button beside Gates to enable them.
11 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/approvals?view=vsts
12 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/approvals?view=vsts
13 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/gates?view=vsts
3. Click +Add to see the available types of gates, then click Query work items.
We will use the Query work items gate to check if there are any outstanding bugs that need to be dealt
with. It does this by running a work item query. This is an example of what is commonly called a Quality
Gate.
4. Set Display name to No critical bugs allowed, and from the Query drop down list, choose Critical
Bugs. Leave the Upper threshold set to zero because we don't want to allow any bugs at all.
5. Click the drop down beside Evaluation options to see what can be configured. While 15 minutes is a
reasonable value in production, for our testing, change The time between re-evaluation of gates to
5 Minutes.
The release gate doesn't just fail or pass a single time. It can keep evaluating the status of the gate. It
might fail the first time, but after re-evaluation, it might then pass if the underlying issue has been
corrected.
6. Close the pane and click Save and OK to save the work.
7. Click Create release to start a new release, and in the Create a new release pane, click Create.
9. If it is waiting for approval, click Approve to allow it to continue, and in the Development pane, click
Approve.
After a short while, you should see the release continuing and then entering the phase where it will
process the gates.
10. In the Development pane, click Gates to see the status of the release gates.
You will notice that the gate failed the first time it was checked. In fact, it will be stuck in the processing
gates stage, as there is a critical bug. Let's look at that bug and resolve it.
11. Close the pane and click Save then OK to save the work.
13. In the Queries window, click All to see all the available queries.
You will see that there is one critical bug that needs to be resolved.
15. In the properties pane for the bug, change the State to Done, then click Save.
Note that there are now no critical bugs that will stop the release.
17. Return to the release by clicking Pipelines then Releases in the main menu, then clicking the name of
the latest release.
18. When the release gate is checked next time, the release should continue and complete successfully.
Clean up
To avoid excessive wait time in later walkthroughs, we'll disable the release gates.
19. In the main menu, click Pipelines, then click Releases, then click Edit to open the release pipeline
editor.
20. Click the Pre-deployment conditions icon (i.e., the lightning bolt) on the Development task, and in
the Pre-deployment conditions pane, click the switch beside Gates to disable release gates.
21. Click Save, then click OK.
The release also has a quality aspect, but this is tightly related to the quality of the actual deployment
and the package that has been deployed.
When we want to measure the quality of a release itself, we can perform all kinds of checks within the pipeline. You can execute different types of tests, such as integration tests, load tests, or even UI tests, while running your pipeline, and check the quality of the release that you are deploying.
Using a quality gate is also a perfect way to check the quality of your release. There are many different quality gates: for example, a gate that monitors whether everything is healthy on your deployment targets, or work item gates that verify the quality of your requirements process. You can also add additional security and compliance checks: for example, do we comply with the four-eyes principle, and do we have the proper traceability?
●● Compliance checks
Document store
An often-used way of storing release notes is by creating text files, or documents in some document
store. This way, the release notes are stored together with other documents. The downside of this
approach is that there is no direct connection between the release in the release management tool and
the release notes that belong to this release.
Wiki
The approach most commonly used by customers is to store the release notes in a wiki, for example Confluence from Atlassian, SharePoint wiki, SlimWiki, or the wiki in Azure DevOps.
The release notes are created as a page in the wiki, and by using hyperlinks, relations can be associated
with the build, the release, and the artifacts.
In a work item
Another option is to store your release notes as part of your work items. Work items can be Bugs, Tasks,
Product Backlog Items or User Stories. To save release notes in work items, you can create or use a
separate field within the work item. In this field, you type the publicly available release notes that will be
communicated to the customer. With a script or specific task in your build and release pipeline, you can
then generate the release notes and store them as an artifact or publish them to an internal or external
website.
14 https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes
15 https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks
16 https://www.atlassian.com/software/confluence
17 https://azure.microsoft.com/en-us/services/devops/wiki/
Stages
Running a Continuous Integration pipeline that builds and deploys your product is a very common scenario.
But what if you want to deploy the same release to different environments? When choosing the right release management tool, you should consider the following things when it comes to stages (or environments):
●● Can you use the same artifact to deploy to different stages?
●● Can you vary the configuration between the stages?
●● Can you have different steps for each stage?
●● Can you follow the release between the stages?
●● Can you track the artifacts / work items and source code between the stages?
●● Traceability
●● Can we see where the released software originates from (which code)?
●● Can we see the requirements that led to this change?
●● Can we follow the requirements through the code, build and release?
●● Auditability
●● Can we see who changed the release process, when, and why?
●● Can we see who deployed a new release, when, and why?
Security is vital here. If people can do everything, including deleting evidence, that is not acceptable. Setting up the right roles, permissions, and authorization is important to protect both your system and your pipeline.
When looking at an appropriate Release Management tool, you can consider:
●● Does it integrate with your company's Active Directory?
●● Can you set up roles and permissions?
●● Is there change history of the release pipeline itself?
●● Can you ensure the artifact did not change during the release?
●● Can you link requirements to the release?
●● Can you link source code changes to the release pipeline?
●● Can you enforce approval or 4-eyes principle?
●● Can you see release history and the people who triggered the release?
Jenkins
The leading open-source automation server, Jenkins provides hundreds of plugins to support building,
deploying, and automating any project.
●● On-premises system; offered as SaaS by third parties.
Links
●● Jenkins18
●● Tutorial: Jenkins CI/CD to deploy an ASP.NET Core application to Azure Web App service19
●● Azure Friday - Jenkins CI/CD with Service Fabric20
Circle CI
CircleCI’s continuous integration and delivery platform helps software teams rapidly release code with confidence by automating the build, test, and deploy process. CircleCI offers a modern software development platform that lets teams ramp quickly, scale easily, and build confidently every day.
●● CircleCI is a cloud-based system or an on-prem system.
●● REST API — you have access to projects, builds, and artifacts.
●● The result of the build is going to be an artifact.
●● Integration with GitHub and BitBucket.
●● Integrates with various clouds.
●● Not part of a bigger suite.
●● Not fully customizable.
Links
●● circleci/21
●● How to get started on CircleCI 2.0: CircleCI 2.0 Demo22
18 https://jenkins.io/
19 https://cloudblogs.microsoft.com/opensource/2018/09/21/configure-jenkins-cicd-pipeline-deploy-asp-net-core-application/
20 https://www.youtube.com/watch?v=5RYmooIZqS4
21 https://circleci.com/
22 https://www.youtube.com/watch?v=KhjwnTD4oec
Azure Pipelines
●● Integration with many build and source control systems (GitHub, Jenkins, Azure Repos, Bitbucket, Team Foundation Version Control, etc.)
●● Cross Platform support, all languages, and platforms
●● Rich marketplace with extra plugins, build tasks and release tasks and dashboard widgets.
●● Part of the Azure DevOps suite. Tightly integrated
●● Fully customizable
●● Manual approvals and Release Quality Gates supported
●● Integrated with (Azure) Active Directory
●● Extensive roles and permissions
Links
●● Azure Pipelines23
●● Building and Deploying your Code with Azure Pipelines24
GitLab Pipelines
GitLab helps teams automate the release and delivery of their applications, enabling them to shorten the delivery lifecycle, streamline manual processes, and accelerate team velocity. With Continuous Delivery (CD) built into the pipeline, deployment can be automated to multiple environments such as staging and production, with support for advanced features such as canary deployments. Because the configuration and definition of the application are version controlled and managed, it is easy to configure and deploy your application on demand.
GitLab25
Atlassian Bamboo
Bamboo is a continuous integration (CI) server that can be used to automate the release management for
a software application, creating a Continuous Delivery pipeline.
Atlassian Bamboo26
XL Deploy/XL Release
XL Release is an end-to-end pipeline orchestration tool for Continuous Delivery and DevOps teams. It
handles automated tasks, manual tasks, and complex dependencies and release trains. And XL Release is
designed to integrate with your change and release management tools.
xl-release - XebiaLabs27
23 https://azure.microsoft.com/en-us/services/devops/pipelines/
24 https://www.youtube.com/watch?v=NuYDAs3kNV8
25 https://about.gitlab.com/stages-devops-lifecycle/release/
26 https://www.atlassian.com/software/bamboo/features
27 https://xebialabs.com/products/xl-release/
Labs
Lab10a: Controlling deployments using Release
Gates
Lab overview
This lab covers the configuration of the deployment gates and details how to use them to control
execution of Azure pipelines. To illustrate their implementation, you will configure a release definition
with two environments for an Azure Web App. You will deploy to the Canary environment only when
there are no blocking bugs for the app and mark the Canary environment complete only when there are
no active alerts in Application Insights of Azure Monitor.
A release pipeline specifies the end-to-end release process for an application to be deployed across a
range of environments. Deployments to each environment are fully automated by using jobs and tasks.
Ideally, you do not want new updates to the applications to be exposed to all the users at the same time.
It is a best practice to expose updates in a phased manner, i.e., expose them to a subset of users, monitor their usage, and then expose them to the remaining users based on the experience of the initial set of users.
Approvals and gates enable you to take control over the start and completion of the deployments in a
release. With approvals, you can wait for users to manually approve or reject deployments. Using release
gates, you can specify application health criteria that must be met before release is promoted to the next
environment. Prior to or after any environment deployment, all the specified gates are automatically
evaluated until they all pass or until they reach your defined timeout period and fail.
Gates can be added to an environment in the release definition from the pre-deployment conditions or
the post-deployment conditions panel. Multiple gates can be added to the environment conditions to
ensure all the inputs are successful for the release.
As an example:
●● Pre-deployment gates ensure there are no active issues in the work item or problem management
system before deploying a build to an environment.
●● Post-deployment gates ensure there are no incidents from the monitoring or incident management
system for the app after it’s been deployed, before promoting the release to the next environment.
There are 4 types of gates included by default in every account.
●● Invoke Azure function: Triggers execution of an Azure function and ensures a successful completion.
●● Query Azure monitor alerts: Observes the configured Azure monitor alert rules for active alerts.
●● Invoke REST API: Makes a call to a REST API and continues if it returns a successful response.
●● Query Workitems: Ensures the number of matching work items returned from a query is within a
threshold.
Objectives
After you complete this lab, you will be able to:
●● Configure release pipelines
●● Configure release gates
●● Test release gates
Lab duration
●● Estimated time: 75 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions28
Objectives
After you complete this lab, you will be able to:
●● create a release dashboard
●● use REST API to query release information
Lab duration
●● Estimated time: 45 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions29
28 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
29 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
What can you use to prevent a deployment in Azure DevOps when a security testing tool finds a compliance
problem?
Review Question 3
Even if you create exactly what a user requested at the start of the project, the solution will often be unsuita-
ble for the same user. Why?
Answers
When you want to change an immutable object of any type, what do you do?
You make a new one and (possibly) remove the old one.
What can you use to prevent a deployment in Azure DevOps when a security testing tool finds a compli-
ance problem?
Release gate
Even if you create exactly what a user requested at the start of the project, the solution will often be
unsuitable for the same user. Why?
Module overview
Module overview
Continuous Delivery is very much about enabling teams within your organization to deliver software on demand. Making it possible to press a button at any time of the day and still ship a good product means several things: the code needs to be of high quality, the build needs to be fully automated and tested, and the deployment of the software needs to be fully automated and tested as well.
Now we need to dive a little further into the release management tooling. We will include many examples from Azure Pipelines, a part of the Azure DevOps suite. Azure DevOps is an integrated solution for implementing DevOps and Continuous Delivery in your organization. We will cover some specifics of Azure Pipelines, but this does not mean they do not apply to other products available in the marketplace. Many of the other tools share the same concepts and differ only in naming.
Release pipelines
A release pipeline, in its simplest form, is nothing more than the execution of several steps. In this module, we will dive a little further into the details of one specific stage: the steps that need to be executed and the mechanism that you need to execute those steps within the pipeline.
In this module, we will talk about the agents and agent pools that you might need to execute your release pipeline. We will look at variables for the release pipeline and the various stages.
After that, we dive into the tasks that you can use to execute your deployment. Do you want to use script files, or do you want to use specific tasks that each perform one job very well? For example, the marketplaces of both Azure DevOps and Jenkins contain many tasks that you can use to make your life a lot easier.
We will talk about secrets and secret management in your pipeline, a fundamental part of securing not only your assets but also the process of releasing your software. At the end of the module, we will talk about alerting mechanisms: how to report on your software, how to report on your quality, and how to get notified by using service hooks. Finally, we will dive a little further into automatic approvals using automated release gates.
Learning objectives
After completing this module, students will be able to:
●● Explain the terminology used in Azure DevOps and other Release Management Tooling
●● Describe what a Build and Release task is, what it can do, and some available deployment tasks
●● Explain why you sometimes need multiple release jobs in one release pipeline
●● Differentiate between multi-agent and multi-configuration release jobs
●● Use release variables and stage variables in your release pipeline
●● Deploy to an environment securely using a service connection
●● List the different ways to inspect the health of your pipeline and release by using alerts, service hooks,
and reports
Add steps to specify what you want to build, the tests that you want to run, and all the other steps
needed to complete the build process. There are steps for building, testing, running utilities, packaging,
and deploying.
If a task is not available, you can find many community tasks in the marketplaces. Jenkins, Azure DevOps, and Atlassian each have an extensive marketplace where additional tasks can be found.
Links
For more information, see also:
●● Task types & usage1
●● Tasks for Azure2
●● Atlassian marketplace3
●● Jenkins Plugins4
●● Azure DevOps Marketplace5
Release jobs
You can organize your build or release pipeline into jobs. Every build or deployment pipeline has at least
one job.
A job is a series of tasks that run sequentially on the same target. This can be a Windows server, a Linux
server, a container, or a deployment group. A release job is executed by a build/release agent, and an agent can only run one job at a time.
During the design of your job, you specify a series of tasks that you want to run on the same agent. At
runtime (when either the build or release pipeline is triggered), each job is dispatched as one or more
jobs to its target.
The following scenario illustrates where jobs play an essential role.
Assume that you have built an application with a back end in .NET, a front end in Angular, and a native iOS mobile app. These might be developed in three different source control repositories, triggering three different builds and delivering three different artifacts.
The release pipeline brings the artifacts together and deploys the back end, front end, and mobile app together as part of one release. The deployments need to take place on different agents: the iOS app needs to be built and distributed from a Mac, the Angular app is hosted on Linux so it is best deployed from a Linux machine, and the back end might be deployed from a Windows machine.
Because you want all three deployments to be part of one pipeline, you can define multiple release jobs, each targeting different agents, servers, or deployment groups.
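As a rough sketch of that scenario in YAML (the job names and pool images are illustrative only, and the script steps stand in for the real deployment tasks), each job is dispatched to a different type of agent:

jobs:
- job: DeployBackend
  pool:
    vmImage: windows-latest     # .NET back end deployed from a Windows agent
  steps:
  - script: echo "Deploy .NET back end"

- job: DeployFrontend
  pool:
    vmImage: ubuntu-latest      # Angular front end deployed from a Linux agent
  steps:
  - script: echo "Deploy Angular front end"

- job: DistributeMobileApp
  pool:
    vmImage: macOS-latest       # iOS app built and distributed from a Mac agent
  steps:
  - script: echo "Build and distribute the iOS app"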
By default, jobs run on the host machine where the agent is installed. This is convenient and typically
well-suited for projects that are just beginning to adopt continuous integration (CI). Over time, you may
find that you want more control over the stage where your tasks run.
1 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml
2 https://github.com/microsoft/azure-pipelines-tasks
3 https://marketplace.atlassian.com/addons/app/bamboo/trending
4 https://plugins.jenkins.io/
5 https://marketplace.visualstudio.com/
6 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=vsts&tabs=yaml
●● Multi-agent: Run the same set of tasks on multiple agents using the specified number of agents. For
example, you can run a broad suite of 1000 tests on a single agent. Or you can use two agents and
run 500 tests on each one in parallel.
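A minimal YAML sketch of a multi-agent job (the step shown is a placeholder for the real test command) uses the parallel strategy together with the predefined slicing variables:

jobs:
- job: RunTests
  strategy:
    parallel: 2     # two agents run the same steps, each taking a slice of the tests
  steps:
  - script: echo "Running slice $(System.JobPositionInPhase) of $(System.TotalJobsInPhase)"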
For more information, see Specify jobs in your pipeline7.
7 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=vsts&tabs=designer#multi-configuration
On-premises servers
In most cases, when you deploy to an on-premises server, the hardware and the operating system are already in place. The server is already there and ready; sometimes it is empty, but most of the time it is not. In this case, the release pipeline can focus on deploying the application only.
In some cases, you might want to start or stop a virtual machine (for example Hyper-V or VMWare). The
scripts that you use to start or stop the on-premises servers should be part of your source control and be
delivered to your release pipeline as a build artifact. Using a task in the release pipeline, you can run the
script that starts or stops the servers.
When you want to take it one step further and configure the server as well, you should look at technologies like PowerShell Desired State Configuration (DSC), or tools like Puppet and Chef. All of these products maintain your server and keep it in a particular state. When the server drifts from that state, they (Puppet, Chef, DSC) restore the changed configuration to the original configuration.
Integrating a tool like Puppet, Chef, or PowerShell DSC into the release pipeline is no different from adding any other task.
Infrastructure as a service
When you use the cloud as your target environment, things change a little. Some organizations did a lift-and-shift from their on-premises servers to cloud servers; in that case, deployment works the same as for an on-premises server. But when you use the cloud to provide you with Infrastructure as a Service (IaaS), you can leverage the power of the cloud to create and start servers when you need them.
This is where Infrastructure as Code (IaC) starts playing a significant role. By creating a script or template, you can create a server or other infrastructure components like a SQL server, a network, or an IP address. By defining the template or command line in a script file and saving it in source control, you can use that file in your release pipeline tasks to execute it against your target cloud. As part of your pipeline, the server (or other component) is created, and after that, you can execute the steps to deploy the software.
Technologies like Azure Resource Manager (ARM) or Terraform are great to create infrastructure on
demand.
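As an illustration, a release stage could apply an ARM template that travels with the build artifact before deploying the application. This is a sketch only; the service connection name, resource group, location, and template path are assumptions:

steps:
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'ARM Service Connection'    # assumed service connection name
    subscriptionId: '$(subscriptionId)'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'rg-webapp-dev'                          # hypothetical resource group
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/drop/azuredeploy.json'      # hypothetical template path
    deploymentMode: 'Incremental'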
Platform as a Service
When you are moving from Infrastructure as a Service (IaaS) towards Platform as a Service (PaaS), you will
get the infrastructure from the cloud that you are running on.
For example, in Azure you can choose to create a web application. The server, the hardware, the network, the public IP address, the storage account, and even the web server are arranged by the cloud. The user only needs to take care of the web application that will run on this platform.
The only thing you need to do is provide the templates that instruct the cloud to create a WebApp. The same goes for Functions as a Service (FaaS), or serverless technologies; in Azure these are called Azure Functions and in AWS they are called AWS Lambda.
You only deploy your application, and the cloud takes care of the rest. However, you need to instruct the platform (the cloud) to create a placeholder where your application can be hosted. You can define this template in ARM or Terraform, you can use the Azure CLI or command line tools, or in AWS you can use CloudFormation. In all cases, the infrastructure is defined in a script file that lives alongside the application code in source control.
Clusters
Finally, you can deploy your software to a cluster. A cluster is a group of servers that work together to
host high-scale applications.
When you run a cluster as Infrastructure as a Service, you need to create and maintain the cluster. This
means that you need to provide the templates to create a cluster. You also need to make sure that you
roll out updates, bug fixes and patches to your cluster. This is comparable with Infrastructure as a Service.
When you use a hosted cluster, you should consider it Platform as a Service. You instruct the cloud to create the cluster, and you deploy your software to it. When you run a container cluster, you can use container cluster technologies such as Kubernetes or Docker Swarm.
Service connections
In addition to the environments, when a pipeline needs access to resources, you will often need to
provision service connections.
Summary
Regardless of the technology you choose to host your application, the creation, or at least the configuration, of your infrastructure should be part of your release pipeline and part of your source control repository. Infrastructure as Code is a fundamental part of Continuous Delivery and gives you the freedom to create servers and environments on demand.
Links
●● AWS Cloudformation8
8 https://aws.amazon.com/cloudformation/
●● Terraform9
●● Powershell DSC10
●● AWS Lambda11
●● Azure Functions12
●● Chef13
●● Puppet14
●● Azure Resource Manager /ARM15
Steps
You can set up a service connection to create a secure and safe connection to the environment that you want to deploy to. Service connections are also used to get resources from other places in a secure manner; for example, you might need to get your source code from GitHub.
In this case, let's look at configuring a service connection to Azure.
1. From the main menu in the Parts Unlimited project, click Project settings at the bottom of the
screen.
2. In the Project Settings pane, from the Pipelines section, click Service connections. Click the drop
down beside +New service connection.
9 https://www.terraform.io/
10 https://docs.microsoft.com/en-us/powershell/scripting/dsc/overview/overview?view=powershell-7
11 https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
12 https://azure.microsoft.com/en-us/services/functions
13 https://www.chef.io/chef/
14 https://puppet.com/
15 https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview
As you can see, there are many types of service connections. You can create a connection to the Apple
App Store or to the Docker Registry, to Bitbucket, or to Azure Service bus.
In this case, we want to deploy a new Azure resource, so we'll use the Azure Resource Manager option.
3. Click Azure Resource Manager to add a new service connection.
4. Set the Connection name to ARM Service Connection, click on an Azure Subscription, then select
an existing Resource Group.
Note: You might be prompted to logon to Azure at this point. If so, logon first.
Notice that what we are creating is a Service Principal. We will be using the Service Principal as a means
of authenticating to Azure. At the top of the window, there is also an option to set up Managed Identity
Authentication instead.
The Service Principal is a type of service account that only has permissions in the specific subscription and resource group. This makes it a very safe way to connect from the pipeline.
5. Click OK to create it. It will then be shown in the list.
6. In the main Parts Unlimited menu, click Pipelines then Releases, then Edit to see the release pipeline.
Click the link to View stage tasks.
The current list of tasks is then shown. Because we started with an empty template, there are no tasks yet.
Each stage can execute many tasks.
7. Click the + sign to the right of Agent job to add a new task. Note the available list of task types.
8. In the Search box, enter the word storage and note the list of storage-related tasks. These include
standard tasks, and tasks available from the Marketplace.
We will use the Azure file copy task to copy one of our source files to a storage account container.
9. Hover over the Azure file copy task type and click Add when it appears. The task will be added to the
stage but requires further configuration.
10. Click the File Copy task to see the required settings.
11. Set the Display Name to Backup website zip file, then click the ellipsis beside Source and locate the
file as follows, then click OK to select it.
We then need to provide details of how to connect to the Azure subscription. The easiest and most
secure way to do that is to use our new Service Connection.
12. From the Azure Subscription drop down list, find and select the ARM Service Connection that we
created.
13. From the Destination Type drop down list, select Azure Blob, and from the RM Storage Account
and Container Name, select the storage account, and enter the name of the container, then click
Save at the top of the screen and OK.
14. To test the task, click Create release, and in the Create a new release pane, click Create.
15. Click the new release to view the details.
16. On the release page, approve the release so that it can continue.
17. Once the Development stage has completed, you should see the file in the Azure storage account.
A key advantage of using service connections is that this type of connection is managed in a single place
within the project settings, and doesn't involve connection details spread throughout the pipeline tasks.
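For comparison, in a YAML pipeline the same service connection is referenced by name, so no credentials appear in the pipeline definition itself. This is a sketch only; the source path, storage account, and container name are assumptions:

steps:
- task: AzureFileCopy@4
  displayName: Backup website zip file
  inputs:
    SourcePath: '$(Pipeline.Workspace)/drop/PartsUnlimited.zip'   # hypothetical artifact path
    azureSubscription: 'ARM Service Connection'                   # the service connection created above
    Destination: AzureBlob
    storage: 'partsunlimitedstorage'                              # hypothetical storage account
    ContainerName: 'backups'                                      # hypothetical container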
Steps
Let's now look at how a release pipeline can reuse groups of tasks.
It's common to want to reuse a group of tasks in more than one stage within a pipeline or in different
pipelines.
1. In the main menu for the Parts Unlimited project, click Pipelines then click Task groups.
You will notice that you don't currently have any task groups defined.
16 https://docs.microsoft.com/en-us/azure/devops/pipelines/library/task-groups?view=vsts
17 https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-
schema#template-references
There is an option to import task groups but the most common way to create a task group is directly
within the release pipeline, so let's do that.
2. In the main menu, click Pipelines then click Releases, and click Edit to open the pipeline that we have
been working on.
3. The Development stage currently has a single task. We will add another task to that stage. Click the
View stage tasks link to open the stage editor.
4. Click the + sign to the right of the Agent job line to add a new task. In the Search box, type data-
base.
6. Set the Display name to Deploy devopslog database, and from the Azure Subscriptions drop
down list, click ARM Service Connection.
Note: We can reuse our service connection here.
7. In the SQL Database section, set a unique name for the SQL Server, set the Database to devopslog,
set the Login to devopsadmin, and set any suitable password.
8. In the Deployment Package section, set the Deploy type to Inline SQL Script, set the Inline SQL
Script to:
CREATE TABLE dbo.TrackingLog
(
TrackingLogID int IDENTITY(1,1) PRIMARY KEY,
TrackingDetails nvarchar(max)
);
11. Click Create task group, then in the Create task group window, set Name to Backup website zip
file and deploy devopslog. Click the Category drop down list to see the available options. Ensure
that Deploy is selected, and click Create.
In the list of tasks, the individual tasks have now disappeared, and the new task group appears instead.
12. From the Task drop down list, select the Test Team A stage.
13. Click the + sign to the right of Agent job to add a new task. In the Search box, type backup and
notice that the new task group appears like any other task.
14. Hover on the task group and click Add when it appears.
Task groups allow for easy reuse of a set of tasks and limit the number of places where edits need to occur.
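Task groups are a classic-pipeline feature; in YAML pipelines the equivalent reuse mechanism is a template (see the template references link earlier in this topic). A minimal sketch, with a hypothetical template file and placeholder steps:

# File: backup-and-deploy-steps.yml (hypothetical shared template)
steps:
- script: echo "Backup website zip file"
- script: echo "Deploy devopslog database"

# File: azure-pipelines.yml — the same steps reused in two stages
stages:
- stage: Development
  jobs:
  - job: Deploy
    steps:
    - template: backup-and-deploy-steps.yml
- stage: TestTeamA
  jobs:
  - job: Deploy
    steps:
    - template: backup-and-deploy-steps.yml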
Walkthrough cleanup
15. Click Remove to remove the task group from the Test Team A stage.
16. From the Tasks drop down list, select the Development stage. Again click Remove to remove the
task group from the Development stage.
17. Click Save then OK.
Predefined variables
When running your release pipeline, there are always variables that you need that come from the agent
or context of the release pipeline. For example, the agent directory where the sources are downloaded,
the build number or build id, the name of the agent or any other information. This information is usually
accessible in pre-defined variables that you can use in your tasks.
Stage variables
Share values across all the tasks within one specific stage by using stage variables. Use a stage-level
variable for values that vary from stage to stage (and are the same for all the tasks in a stage).
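A brief YAML sketch (the variable name and value are hypothetical) showing predefined variables used next to a stage-level variable:

stages:
- stage: Test
  variables:
    websiteUrl: 'https://test.example.com'    # hypothetical stage-level value
  jobs:
  - job: ShowContext
    steps:
    - script: |
        echo "Build number:   $(Build.BuildNumber)"
        echo "Agent name:     $(Agent.Name)"
        echo "Sources folder: $(Build.SourcesDirectory)"
        echo "Target site:    $(websiteUrl)"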
Variable groups
Share values across all the definitions in a project by using variable groups. We will cover variable groups
later in this module.
18 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/variables?view=vsts&tabs=batch
Variable groups
A variable group is used to store values that you want to make available across multiple builds and
release pipelines.
Examples
●● Store the username and password for a shared server
●● Store a shared connection string
●● Store the geolocation of an application
●● Store all settings for a specific application
For more information, see Variable Groups for Azure Pipelines and TFS19.
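For reference, a YAML pipeline links a variable group by name. The sketch below assumes the Website Test Product Details group created in the walkthrough that follows:

variables:
- group: 'Website Test Product Details'   # provides ProductCode, Quantity, SalesUnit
- name: environmentName                   # an ordinary pipeline variable alongside the group
  value: Development

steps:
- script: echo "Testing product $(ProductCode), quantity $(Quantity) per $(SalesUnit) in $(environmentName)"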
Steps
Let's now look at how a release pipeline can make use of predefined sets of variables, called Variable
Groups.
Like the way we used task groups, variable groups provide a convenient way to avoid the need to
redefine many variables when defining stages within pipelines, and even when working across multiple
pipelines. Let's create a variable group and see how it can be used.
1. On the main menu for the Parts Unlimited project, click Pipelines, then click Library. There are
currently no variable groups in the project.
19 https://docs.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=vsts
2. Click + Variable group to commence creating a variable group. Set Variable group name to Web-
site Test Product Details.
3. In the Variables section, click +Add, then in Name, enter ProductCode, and in Value, enter RED-
POLOXL.
You can see an extra column that shows a lock. It allows you to have variable values that are locked and not displayed in the configuration screens. While this is often used for values like passwords, notice that there is also an option to link secrets from an Azure key vault as variables. That would be a preferable option for variables that provide credentials which need to be secured outside the project.
In this example, we are just providing details of a product that will be used in testing the website.
4. Add another variable called Quantity with a value of 12.
5. Add another variable called SalesUnit with a value of Each.
7. On the main menu, click Pipelines, then click Releases, then click Edit to return to editing the release
pipeline that we have been working on. From the top menu, click Variables.
Variable groups are linked to pipelines, rather than being directly added to them.
9. Click Link variable group, then in the Link variable group pane, click to select the Website Test Product Details variable group (notice that it shows how many variables it contains), then in the Variable group scope, select the Development, Test Team A, and Test Team B stages.
We need the test product for development and during testing, but we do not need it in production. If it
were needed in all stages, we would have chosen Release for the Variable group scope instead.
10. Click Link to complete the link.
The variables contained in the variable group are now available for use within all stages except Produc-
tion, just the same way as any other variable.
20 https://docs.microsoft.com/en-us/azure/devops/extend/develop/add-build-task?view=vsts
Source: http://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants/
We can make four quadrants where each side of the square defines what we are targeting with our tests.
●● Business facing - the tests are more functional and most of the time executed by end users of the
system or by specialized testers that know the problem domain very well.
●● Supporting the Team - the tests help a development team to get constant feedback on the product so they can find bugs fast and deliver a product with quality built in.
●● Technology facing - the tests are rather technical and not meaningful to businesspeople. They are typically written and executed by the developers in a development team.
●● Critique Product - tests that validate the workings of a product against its functional and non-functional requirements.
Now we can place different test types we see in the different quadrants.
For example, we can put functional tests, story tests, prototypes, and simulations in the first quadrant. These tests are there to support the team in delivering the right functionality and are business facing, since they are more functional.
In quadrant two we can place tests like exploratory tests, Usability tests, acceptance tests, etc.
In quadrant three we place tests like Unit tests, Component tests, and System or integration tests.
In quadrant four we place Performance tests, load tests, security tests, and any other non-functional
requirements test.
Now if you look at these quadrants, you can see that certain tests are easy to automate or are automated by nature. These tests are in quadrants 3 and 4.
Tests that are automatable but most of the time not automated by nature are the tests in quadrant 1.
Tests that are the hardest to automate are in quadrant 2.
What we also see is that the tests that cannot be automated, or are hard to automate, are tests that can be executed in an earlier phase rather than after release. This is what we call shift-left: moving the testing process earlier in the development cycle.
We need to automate as many tests as possible. And we need to test as soon as possible. A few of the
principles we can use are:
●● Tests should be written at the lowest level possible.
●● Write once, run anywhere including production system.
●● Product is designed for testability.
●● Test code is product code; only reliable tests survive.
●● Test ownership follows product ownership.
By testing at the lowest level possible, you will find that you have many tests that do not require infra-
structure or applications to be deployed. For the tests that need an app or infrastructure, we can use the
pipeline to execute them.
To execute tests within the pipeline, we can run scripts or use tools that execute certain types of tests. On many occasions, these are external tools that you execute from the pipeline, like OWASP ZAP, SpecFlow, or Selenium. On other occasions, you can use test functionality from a platform like Azure, for example availability or load tests that are executed from within the cloud platform.
When you want to write your own automated tests, choose the language that resembles the language of your code. In most cases, the developers who write the application should also write the tests, so it makes sense to use the same language. For example, write tests for your .NET application in .NET, and write tests for your Angular application in Angular.
To execute Unit Tests or other low-level tests that do not need a deployed application or infrastructure,
the build and release agent can handle this. When you need to execute tests with a UI or other special-
ized functionality, you need to have a Test agent that can run the test and report the results back.
Installation of the test agent then needs to be done up front, or as part of the execution of your pipeline.
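As a simple illustration of running low-level tests on the build/release agent (a sketch only; the test project pattern is an assumption), a .NET test step in YAML could look like this:

steps:
- task: DotNetCoreCLI@2
  displayName: Run unit tests on the agent
  inputs:
    command: test
    projects: '**/*Tests.csproj'        # hypothetical test project pattern
    arguments: '--configuration Release'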
21 https://docs.microsoft.com/en-us/azure/devops/pipelines/test/set-up-continuous-test-environments-builds?view=vsts
22 https://docs.microsoft.com/en-us/azure/devops/test/load-test/overview?view=vsts
23 https://azure.microsoft.com/nl-nl/blog/creating-a-web-test-alert-programmatically-with-application-insights/
24 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-monitor-web-app-availability
Release gates
Release gates allow automatic collection of health signals from external services and then promote the
release when all the signals are successful at the same time or stop the deployment on timeout. Typically,
gates are used in connection with incident management, problem management, change management,
monitoring, and external approval systems. Release gates are discussed in an upcoming module.
Service hooks
Service hooks enable you to perform tasks on other services when events happen in your Azure DevOps
Services projects. For example, create a card in Trello when a work item is created or send a push notifica-
tion to your team's Slack when a build fails. Service hooks can also be used in custom apps and services
as a more efficient way to drive activities when events happen in your projects.
Reporting
Reporting is the most static approach when it comes to inspection, but in many cases also the most obvious. Creating a dashboard that shows the status of your builds and releases, combined with team-specific information, is in many cases a valuable asset for gaining insights.
Read more at About dashboards, charts, reports, & widgets25.
25 https://docs.microsoft.com/en-us/azure/devops/report/dashboards/overview?view=vsts
The ability to receive alerts and notifications is a powerful mechanism for being informed about events in your system the moment they happen.
For example, when a build takes a while to complete, you probably do not want to stare at the screen until it has finished, but you do want to know when it does. Getting an email or another kind of notification instead is convenient and powerful. Another example is a system that needs to be monitored: you want to be notified by the system in real time. By implementing a successful alert mechanism, you can react to situations proactively, before anybody is bothered by them.
Alerts
However, when you define alerts, you need to be careful. If you get an alert for every single event that happens in the system, your mailbox will quickly be flooded. The more irrelevant alerts you receive, the higher the chance that people stop looking at alerts and notifications altogether and miss the important ones.
Service hooks
Service hooks enable you to perform tasks on other services when events happen in your Azure DevOps
Services projects. For example, create a card in Trello when a work item is created or send a push notifica-
tion to your team's mobile devices when a build fails. Service hooks can also be used in custom apps and
services as a more efficient way to drive activities when events happen in your projects.
Azure DevOps includes built-in support for the following Service Hooks:
●● Build and release: AppVeyor, Bamboo, Jenkins, MyGet
●● Collaborate: Campfire, Flowdock, HipChat, Hubot, Slack
●● Customer support: UserVoice, Zendesk
●● Plan and track: Trello
●● Integrate: Azure Service Bus, Azure Storage, Web Hooks, Zapier
26 https://docs.microsoft.com/en-us/azure/devops/notifications/index?view=vsts
27 https://docs.microsoft.com/en-us/azure/devops/notifications/concepts-events-and-notifications?view=vsts
This list will change over time.
To learn more about service hooks and how to use and create them, read Service Hooks in Azure
DevOps28.
Steps
Let's now look at how a release pipeline can communicate with other services by using service hooks.
Azure DevOps can be integrated with a wide variety of other applications. It has built in support for many
applications, and generic hooks for working with other applications. Let's look.
1. Below the main menu for the Parts Unlimited project, click Project settings.
28 https://docs.microsoft.com/en-us/azure/devops/service-hooks/overview?view=vsts
By using service hooks, we can notify other applications that an event has occurred within Azure DevOps.
We could also send a message to a team in Microsoft Teams or Slack. We could also trigger an action in
Bamboo or Jenkins.
4. Scroll to the bottom of the list of applications and click on Web Hooks.
If the application that you want to communicate with isn't in the list of available application hooks, you
can almost always use the Web Hooks option as a generic way to communicate. It allows you to make an
HTTP POST when an event occurs. So, if for example, you wanted to call an Azure Function or an Azure
Logic App, you could use this option.
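To give a feel for what such a call looks like, here is an illustrative curl command; the endpoint URL and JSON payload are hypothetical and do not reflect the exact Azure DevOps event schema:
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"eventType": "build.complete", "status": "succeeded"}' \
  https://example.com/api/devops-events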
To demonstrate the basic process for calling web hooks, we'll write a message into a queue in the Azure
Storage account that we have been using.
5. From the list of available applications, click Azure Storage.
6. Click Next. In the Trigger page, we determine which event causes the service hook to be called. Click
the drop down for Trigger on this type of event to see the available event types.
7. Ensure that Release deployment completed is selected, then in the Release pipeline name select
Release to all environments. For Stage, select Production. Drop down the list for Status and note
the available options.
9. In the Action page, enter the name of your Azure storage account.
10. Open the Azure Portal, and from the settings for the storage account, in the Access keys section, copy
the value for Key.
11. Back in the Action page in Azure DevOps, paste in the key.
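As an alternative to copying the key from the portal, you could retrieve it with the Azure CLI; this is only a sketch, and the account and resource group names are placeholders:
az storage account keys list \
  --account-name <storage-account-name> \
  --resource-group <resource-group> \
  --query "[0].value" \
  --output tsv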
13. Make sure that the test succeeded, then click Close, and on the Action page, click Finish.
16. If the release is waiting for approval, click to approve it and wait for the release to complete success-
fully.
Note: if you have run multiple releases, you might have multiple messages in the queue.
19. Click the latest message (usually the bottom of the list) to open it and review the message properties,
then close the Message properties pane.
You have successfully integrated this message queue with your Azure DevOps release pipeline.
Labs
Lab 11a: Configuring pipelines as code with
YAML
Lab overview
Many teams prefer to define their build and release pipelines using YAML. This allows them to access the
same pipeline features as those using the visual designer, but with a markup file that can be managed
like any other source file. YAML build definitions can be added to a project by simply adding the corre-
sponding files to the root of the repository. Azure DevOps also provides default templates for popular
project types, as well as a YAML designer to simplify the process of defining build and release tasks.
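As a rough sketch of how a YAML-based pipeline can also be created from the command line, assuming the azure-devops CLI extension is installed and that the organization, project, pipeline name, repository, and branch shown here are placeholders:
az extension add --name azure-devops
az devops configure --defaults organization=https://dev.azure.com/<organization> project=<project>
az pipelines create \
  --name "PartsUnlimited-CI" \
  --repository <repository-name> \
  --repository-type tfsgit \
  --branch master \
  --yml-path azure-pipelines.yml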
Objectives
After you complete this lab, you will be able to:
●● configure CI/CD pipelines as code with YAML in Azure DevOps
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions29
Objectives
After you complete this lab, you will be able to:
●● Configure a self-hosted Azure DevOps agent
●● Configure release pipeline
●● Trigger build and release
29 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
30 http://www.seleniumhq.org/
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions31
31 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
What should you create to store values that you want to make available across multiple build and release
pipelines?
Review Question 3
How can you provision the agents for deployment groups in each of your VMs?
Review Question 4
How can you identify a default release variable?
Answers
How many deployment jobs can be run concurrently by a single agent?
One
What should you create to store values that you want to make available across multiple build and release
pipelines?
Variable group
How can you provision the agents for deployment groups in each of your VMs?
Module overview
Module overview
This module is about implementing an appropriate deployment pattern.
Learning objectives
After completing this module, students will be able to:
●● Describe deployment patterns
●● Implement blue green deployment
●● Implement canary release
●● Implement progressive exposure deployment
Testing strategy
Your testing strategy should be in place. If you need to run a lot of manual tests to validate your software, that is a bottleneck that prevents you from delivering on demand.
Coding practices
If your software is not written in a safe and maintainable manner, the chances are that you cannot maintain a high release cadence. When your software is complex because of a large amount of technical debt, it is hard to change the code quickly and reliably. Writing high-quality software and high-quality tests is, therefore, an essential part of Continuous Delivery.
Architecture
The architecture of your application is always significant. But when implementing Continuous Delivery, it
is maybe even more so. If your software is a monolith with a lot of tight coupling between the various
components, it is difficult to deliver your software continuously. Every part that is changed might impact
other parts that did not change. Automated tests can track a lot of these unexpected dependencies, but it
is still hard. There is also the time aspect when working with different teams. When Team A relies on a
service of Team B, Team A cannot deliver until Team B is done. This introduces another constraint on
delivery.
Continuous Delivery for large software products is hard. For smaller parts, it is easier. Therefore, breaking
up your software into smaller, independent pieces, is in many cases a good solution. One approach to
solving these issues is to implement microservices.
Microservices architecture
Today, you will frequently hear the term microservices. A microservice is an autonomous, independently deployable, and scalable software component. Microservices are small, focused on doing one thing very well, and able to run autonomously. If one microservice changes, it should not impact any other microservices within your landscape. By choosing a microservices architecture, you will create a landscape of services that can be developed, tested, and deployed separately from each other.
Of course, this implies other risks and complexity. You need to keep track of the interfaces and how they interact with each other. And you need to maintain multiple application lifecycles instead of one.
In a traditional application, we often see a multi-layer architecture: one layer with the UI, a layer with the business logic and services, and a layer with the data services. Sometimes there are dedicated teams for the UI and the backend. When something needs to change, it needs to change in all the layers.
When moving towards a microservices architecture, all these layers are part of the same microservice, and each microservice contains one specific function. The interaction between microservices is done in an asynchronous manner: they do not call each other directly but make use of asynchronous mechanisms such as queues or events.
Each microservice has its own lifecycle and Continuous Delivery pipeline. If you built them correctly, you
could deploy new versions of a microservice without impacting other parts of the system.
A microservice architecture is undoubtedly not a prerequisite for doing Continuous Delivery, but smaller
software components certainly help in implementing a fully automated pipeline.
Traditionally, software was built, and when all features had been implemented, it was deployed to an environment where a group of people could start using it.
The traditional or classical deployment pattern was moving your software to a development stage, a testing stage, maybe an acceptance or staging stage, and finally a production stage. The software moved as one piece through the stages. The production release was in most cases a big bang release, where users were confronted with a lot of changes at the same time. Despite the different stages to test and validate, this approach still involves a lot of risk. By running all your tests and validation on non-production environments, it is hard to predict what happens when your production users start using the software. Of course, you can run load tests and availability tests, but in the end, there is no place like production.
Deployment slots
When using a cloud platform like Azure, doing blue-green deployments is relatively easy. You do not
need to write your own code or set up infrastructure. When using web apps, you can use an out-of-the-
box feature called deployment slots.
Deployment slots are a feature of Azure App Service. They are live apps with their own hostnames. You
can create different slots for your application (e.g., Dev, Test or Stage). The production slot is the slot
where your live app resides. With deployment slots, you can validate app changes in staging before
swapping it with your production slot.
You can use a deployment slot to set up a new version of your application and, when ready, swap the staging environment into production. This is done by an internal swap of the IP addresses of both slots.
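As a minimal sketch, assuming an existing App Service web app (the app and resource group names are placeholders), a slot can be created and later swapped with the Azure CLI:
# Create a staging slot for an existing web app.
az webapp deployment slot create \
  --name <app-name> \
  --resource-group <resource-group> \
  --slot staging

# After validating the new version in the staging slot, swap it into production.
az webapp deployment slot swap \
  --name <app-name> \
  --resource-group <resource-group> \
  --slot staging \
  --target-slot production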
To learn more about Deployment slots, see also:
●● Set up Staging Environments in Azure App Service1
●● Considerations on using Deployment Slots in your DevOps Pipeline2
Steps
Let's now look at how a release pipeline can be used to implement blue-green deployments.
We'll start by creating a new project with a release pipeline that can perform deployments, by using the
Parts Unlimited template again.
3. Click on the PartsUnlimited project (not the PartsUnlimited-YAML project), and click Select Tem-
plate, then click Create Project. When the deployment completes, click Navigate to project.
4. In the main menu for PU Hosted, click Pipelines, then click Builds, then Queue and finally Run to
start a build.
1 https://docs.microsoft.com/en-us/azure/app-service/deploy-staging-slots
2 https://blogs.msdn.microsoft.com/devops/2017/04/10/considerations-on-using-deployment-slots-in-your-devops-pipeline/
5. In the main menu, click Releases. Because a continuous integration trigger was in place, a release was
attempted. However, we have not yet configured the release so it will have failed. Click Edit to enter
edit mode for the release.
6. From the drop-down list beside Tasks, select the Dev stage, then click to select the Azure Deploy-
ment task.
7. In the Azure resource group deployment pane, select your Azure subscription, then click Authorize
when prompted. When authorization completes, select a Location for the web app.
Note: you might be prompted to log in to Azure at this point
8. In the task list, click Azure App Service Deploy to open its settings. Again, select your Azure sub-
scription. Set the Deployment slot to Staging.
Note: the template creates a production site and two deployment slots: Dev and Staging. We will use
Staging for our Green site.
9. In the task list, click Dev and in the Agent job pane, select Azure Pipelines for the Agent pool and
vs2017-win2016 for the Agent Specification.
10. From the top menu, click Pipelines. Click the Dev stage, and in the properties window, rename it to
Green Site. Click the QA stage and click Delete and Confirm. Click the Production stage and click
Delete and Confirm. Click Save then OK.
11. Hover over the Green Site stage and click the Clone icon when it appears. Change the Stage name to
Production. From the Tasks drop down list, select Production.
12. Click the Azure App Service Deploy task and uncheck the Deploy to slot option. Click Save and OK.
The production site isn't deployed to a deployment slot. It is deployed to the main site.
13. Click Create release then Create to create the new release. When it has been created, click the release
link to view its status.
15. Open a new browser tab and navigate to the copied URL. It will take the application a short while to
compile but then the Green website (on the Staging slot) should appear.
Note: you can tell that the staging slot is being used because of the -staging suffix in the website URL
16. Open another new browser tab and navigate to the same URL but without the -staging suffix. The production site should also be working.
Note: Leave both browser windows open for later in the walkthrough
20. From the Tasks drop down list, click to select the Swap Blue-Green stage. Click the + to the right-
hand side of Agent Job to add a new task. In the Search box, type cli.
21. Hover over the Azure CLI template and when the Add button appears, click it, then click to select the
Azure CLI task, to open its settings pane.
22. Configure the pane as follows, with your subscription, a Script Location of Inline script, and the
Inline Script as follows:
az webapp deployment slot swap -g $(ResourceGroupName) -n $(WebsiteName) --slot Staging --target-slot production
23. From the menu above the task list, click Pipeline. Click the Pre-deployment conditions icon for the
Swap Blue-Green stage, then in the Triggers pane, enable Pre-deployment approvals.
24. Configure yourself as an approver, click Save, then OK.
We will make a cosmetic change so that we can see that the website has been updated. We'll change the
word tires in the main page rotation to tyres to target an international audience.
26. Click Edit to allow editing, then find the word tires and replace it with the word tyres. Click Commit
and Commit to save the changes and trigger a build and release.
27. From the main menu, click Pipelines, then Builds. Wait for the continuous integration build to
complete successfully.
28. From the main menu, click Releases. Click to open the latest release (at the top of the list).
You are now being asked to approve the deployment swap across to production. We'll check the green
deployment first.
29. Refresh the Green site (i.e., Staging slot) browser tab and see if your change has appeared. It now
shows the altered word.
30. Refresh the Production site browser tab and notice that it still isn't updated.
31. As you are happy with the change, in release details click Approve, then Approve and wait for the
stage to complete.
32. Refresh the Production site browser tab and check that it now has the updated code.
Final notes
If you check the Green site (the staging slot), you'll see that it now has the previous version of the code.
This is the key difference when using a swap, rather than a typical deployment process from one staged site to the next: you have a very quick fallback option, because you can swap the sites back again if needed.
Feature toggles
Introduction to feature toggles
Feature Flags allow you to change how your system works without making changes to the code. Only a small configuration change is required, and in many cases only for a small number of users. Feature Flags offer a solution to the need to push new code into the trunk and have it deployed, without it being functional yet. They are commonly implemented as the value of variables that control conditional logic.
Imagine that your team is working in the main trunk branch of a banking application. You've decided it's worth trying to have all the work done in the main branch to avoid messy merge operations later, but you need to make substantial changes to how the interest calculations work, people depend on that code every day, and, worse, the changes will take you weeks to complete. You can't leave the main code broken for that period. A Feature Flag could help you get around this. You can change the code so that users who don't have the Feature Flag set keep using the original interest calculation code, while the members of your team who are working on the new interest calculations, and who do have the Feature Flag set, see the new code that's being created. This is an example of a business Feature Flag that's used to determine business logic.
The other type of Feature Flag is a Release Flag. Now, imagine that after you complete the work on the interest calculation code, you're perhaps nervous about publishing the new code to all users at once. You have a group of users who are better at dealing with new code and any issues that arise. These people are often called canaries; the name is based on the old use of canaries in coal mines. You change the configuration so that the canary users also have the Feature Flag set, and they start to exercise the new code as well. If problems occur, you can quickly disable the flag for them again.
Another release flag might be used for AB testing. Perhaps you want to find out if a new feature makes it
faster for users to complete a task. You could have half the users working with the original version of the
code and the other half of the users working with the new version of the code. You can then directly
compare the outcome and decide if the feature is worth keeping. Note that Feature Flags are sometimes
called Feature Toggles instead.
By exposing new features by just “flipping a switch” at runtime, we can deploy new software without
exposing any new or changed functionality to the end user.
The question is what strategy you want to use in releasing a feature to an end user.
●● Reveal the feature to a segment of users, so you can see how the new feature is received and used.
●● Reveal the feature to a randomly selected percentage of users.
●● Reveal the feature to all users at the same time.
The business owner plays a vital role in the process, and you need to work closely with them to choose the right strategy.
Just as with all the other deployment patterns mentioned in the introduction, the most important part is that you always watch how the system behaves.
The whole idea of separating feature deployment from feature exposure is compelling and something we want to incorporate in our Continuous Delivery practice. It helps us with more stable releases and better ways to roll back when a new feature causes problems: we switch it off again and then create a hotfix. By separating deployment from revealing a feature, you create the opportunity to release at any time of day, since the new software will not affect the system that already works.
When the switch is off, the code in the IF branch executes; otherwise the ELSE branch runs, as in the sketch below. Of course, you can make this much more intelligent, controlling the feature toggles from a dashboard, or building capabilities for roles, users, and so on.
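The following is a minimal shell sketch of that pattern; the flag name and the two calculation functions are hypothetical stand-ins for real application code:
#!/usr/bin/env bash
# Hypothetical feature flag, read from an app setting or environment variable.
FEATURE_NEW_INTEREST_CALC="${FEATURE_NEW_INTEREST_CALC:-false}"

calculate_interest_v1() { echo "existing interest calculation"; }  # current behaviour
calculate_interest_v2() { echo "new interest calculation"; }       # code behind the flag

if [ "$FEATURE_NEW_INTEREST_CALC" != "true" ]; then
  # Flag off: the IF branch keeps running the existing code.
  calculate_interest_v1
else
  # Flag on: the ELSE branch exposes the new feature.
  calculate_interest_v2
fi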
If you want to implement feature toggles, there are many frameworks available, both commercial and open source.
For more information, see also Explore how to progressively expose your features in production for
some or all users3.
The most important thing to remember is that you need to remove the toggles from the software as soon as possible; if you keep them around for too long, they become a form of technical debt.
As soon as you introduce a feature flag, you have added to your overall technical debt. Just like other
technical debt, they are easy to add but the longer they are part of your code, the bigger the technical
debt becomes, because you've added scaffolding logic that's needed for the branching within the code.
The cyclomatic complexity of your code keeps increasing as you add more feature flags, as the number of
possible paths through the code increases.
Using feature flags can make your code less solid and can also add these issues:
●● The code is harder to test effectively as the number of logical combinations increases.
●● The code is harder to maintain because it's more complex.
●● The code might even be less secure.
●● It can be harder to duplicate problems when they are found.
A plan for managing the lifecycle of feature flags is critical. As soon as you add a flag, you need to plan
for when it will be removed.
Feature flags shouldn't be repurposed. There have been high profile failures that have occurred because
teams decided to reuse an old flag that they thought was no longer part of the code, for a new purpose.
3 https://docs.microsoft.com/en-us/azure/devops/articles/phase-features-with-feature-flags?view=vsts
4 https://docs.microsoft.com/en-us/azure/azure-app-configuration/manage-feature-flags
Canary releases
Canary releases
The term canary release comes from the days when miners took a canary with them into the coal mines. The purpose of the canary was to detect the presence of toxic gases. The canary would die much sooner than the miner, giving the miners enough warning to escape the potentially lethal environment.
A canary release is a way to identify potential problems as soon as possible without exposing all your end
users to the issue at once. The idea is that you expose a new feature only to a minimal subset of users. By
closely monitoring what happens the moment you enable the feature, you can get relevant information
from this set of users and decide to either continue or rollback (disable the feature). If the canary release
shows potential performance or scalability problems, you can build a fix for that and apply that in the
canary environment. After the canary release has proven to be stable, you can move the canary release to
the actual production environment.
Canary releases can be implemented using a combination of feature toggles, traffic routing, and deployment slots, as the sketch after this list shows:
●● You can route a percentage of traffic to a deployment slot with the new feature enabled.
●● You can target a specific user segment by using feature toggles.
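For example, with Azure App Service you could combine a staging slot with traffic routing; this is a sketch in which the app and resource group names are placeholders (the walkthrough later in this module uses the same command):
# Send 10% of production traffic to the staging slot that hosts the new feature.
az webapp traffic-routing set \
  --name <app-name> \
  --resource-group <resource-group> \
  --distribution staging=10

# If problems appear, clear the routing rules to send all traffic back to production.
az webapp traffic-routing clear \
  --name <app-name> \
  --resource-group <resource-group>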
Traffic manager
In the previous topic, we saw how deployment slots in Azure Web Apps enable you to swap quickly between two different versions of your application. If you want more control over the traffic that flows to your different versions, deployment slots alone are not enough. To control traffic in Azure, you can use a component called Traffic Manager.
Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally
to services across global Azure regions while providing high availability and responsiveness.
Traffic Manager uses DNS to direct client requests to the most appropriate service endpoint based on a
traffic-routing method and the health of the endpoints.
An endpoint is an Internet-facing service hosted inside or outside of Azure. Traffic Manager provides a
range of traffic-routing methods and endpoint monitoring options to suit different application needs and
automatic failover models. Traffic Manager is resilient to failure, including the breakdown of an entire
Azure region.
While the available options can change over time, Traffic manager currently provides six options to
distribute traffic:
●● Priority: Select Priority when you want to use a primary service endpoint for all traffic and provide
backups in case the primary or the backup endpoints are unavailable.
●● Weighted: Select Weighted when you want to distribute traffic across a set of endpoints, either evenly
or according to weights, which you define.
●● Performance: Select Performance when you have endpoints in different geographic locations, and
you want end users to use the “closest” endpoint in terms of the lowest network latency.
●● Geographic: Select Geographic so that users are directed to specific endpoints (Azure, External, or
Nested) based on which geographic location their DNS query originates from. This empowers Traffic
Manager customers to enable scenarios where knowing a user’s geographic region and routing them
based on that is important. Examples include complying with data sovereignty mandates, localization
of content & user experience and measuring traffic from different regions.
●● Multivalue: Select MultiValue for Traffic Manager profiles that can only have IPv4/IPv6 addresses as
endpoints. When a query is received for this profile, all healthy endpoints are returned.
●● Subnet: Select Subnet traffic-routing method to map sets of end-user IP address ranges to a specific
endpoint within a Traffic Manager profile. When a request is received, the endpoint returned will be
the one mapped for that request’s source IP address.
When we look at the options that Traffic Manager offers, the most used option for Continuous Delivery is routing traffic based on weights. (Note: traffic is only routed to endpoints that are currently available.)
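As a hedged sketch of weighted routing (the profile name, DNS prefix, endpoint names, and target resource IDs below are placeholders for your own resources), you could create a profile and two weighted endpoints with the Azure CLI:
# Create a Traffic Manager profile that uses the Weighted routing method.
az network traffic-manager profile create \
  --name my-tm-profile \
  --resource-group <resource-group> \
  --routing-method Weighted \
  --unique-dns-name <globally-unique-dns-prefix>

# Roughly 90% of DNS queries resolve to the stable endpoint, 10% to the canary endpoint.
az network traffic-manager endpoint create \
  --name stable \
  --profile-name my-tm-profile \
  --resource-group <resource-group> \
  --type azureEndpoints \
  --target-resource-id <stable-webapp-resource-id> \
  --weight 90

az network traffic-manager endpoint create \
  --name canary \
  --profile-name my-tm-profile \
  --resource-group <resource-group> \
  --type azureEndpoints \
  --target-resource-id <canary-webapp-resource-id> \
  --weight 10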
For more information, see also:
●● What is Traffic Manager?5
●● How Traffic Manager works6
●● Traffic Manager Routing Methods7
5 https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-overview
6 https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-how-it-works
7 https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-routing-methods
Dark launching
Dark launching
Dark launching is in many ways like canary releases. However, the difference here is that you are looking
to assess the response of users to new features in your frontend, rather than testing the performance of
the backend.
The idea is that rather than launching a new feature for all users, you instead release it to a small set of users. Usually, these users are not aware that they are being used as test users for the new feature, and often you do not even highlight the new feature to them; hence the term “dark” launching.
Another example of dark launching is launching a new feature and using it on the backend to gather metrics. Let me illustrate this with a real-world “launch” example.
As Elon Musk describes in his biography, SpaceX applies all kinds of Agile development principles. SpaceX builds and launches rockets to put satellites into orbit, and it also uses dark launching. When they have a new version of a sensor, they install it alongside the old one. All data is measured and gathered by both the old and the new sensor, and afterward the outcomes of both sensors are compared. Only when the new sensor produces the same or better results is the old sensor replaced.
The same concept can be applied in software. You run all data and calculation through your new feature,
but it is not “exposed” yet.
A/B testing
A/B testing
A/B testing (also known as split testing or bucket testing) is a method of comparing two versions of a
webpage or app against each other to determine which one performs better. A/B testing is mostly an
experiment where two or more variants of a page are shown to users at random, and statistical analysis is
used to determine which variation performs better for a given conversion goal.
A/B testing is not part of continuous delivery, nor a prerequisite for continuous delivery. It is more the other way around: continuous delivery allows you to quickly deliver MVPs to a production environment and to your end users.
Common aims are to experiment with new features, often to see if they improve conversion rates.
Experiments are often continuous, and the impact of change is measured.
A/B testing is out of scope for this course. But because it is a powerful concept that is enabled by implementing continuous delivery, it is mentioned here for you to dive into further.
With a ring-based deployment, you deploy your changes to risk-tolerant customers first, and then progressively roll out to a larger set of customers.
The Microsoft Windows team, for example, uses these rings.
When you have identified multiple groups of users, and you see value in investing in a ring-based
deployment, you need to define your setup.
Some organizations that use canary releasing have multiple deployment slots set up as rings. They first release the feature to ring 0, targeting a well-known set of users, most of the time only their internal organization. After things have proven stable in ring 0, they propagate the release to the next ring, which has a limited set of users outside their organization.
And finally, the feature is released to everyone, often simply by flipping the switch on the feature toggles in the software.
Just as with the other deployment patterns, monitoring and health checks are essential. By using post-deployment release gates that check a ring for health, you can define an automatic propagation to the next ring once everything is stable. When a ring is not healthy, you can halt the deployment to the following rings to reduce the impact.
For more information, see also Explore how to progressively expose your Azure DevOps extension
releases in production to validate, before impacting all users8.
Steps
Let's now look at how a release pipeline can be used to stage features by using ring-based deployments.
When I have a new feature, I might want to just release it to a small number of users at first, just in case
something goes wrong. In authenticated systems, I could do this by having those users as members of a
security group and letting members of that group use the new features.
However, on a public web site, I might not have logged in users. Instead, I might want to just direct a
small percentage of the traffic to use the new features. Let's see how that's configured. We'll create a new
release pipeline that isn't triggered by code changes, but manually when we want to slowly release a new
feature.
We start by assuming that a new feature has already been deployed to the Green site (i.e., the staging
slot).
1. In the main menu for the PU Hosted project, click Pipelines, then click Release, click +New, then
click New release pipeline.
2. When prompted to select a template, click Empty job from the top of the pane.
3. Click on the Stage 1 stage and rename it to Ring 0 (Canary).
8 https://docs.microsoft.com/en-us/azure/devops/articles/phase-rollout-with-rings?view=vsts
4. Hover over the New release pipeline name at the top of the page, and when a pencil appears, click it,
and change the pipeline name to Ring-based Deployment.
5. From the Tasks drop down list, select the Ring 0 (Canary) stage. Click the + to add a new task, and from the list of tasks, hover over Azure CLI. When the Add button appears, click it, then click to select the Azure CLI task in the task list for the stage.
6. In the Azure CLI settings pane, select your Azure subscription, set Script Location to Inline script,
and set the Inline Script to the following, then click Save and OK.
az webapp traffic-routing set --resource-group $(ResourceGroupName) --name $(WebsiteName) --distribution staging=10
This distribution setting will cause 10% of the web traffic to be sent to the new feature Site (i.e., currently
the staging slot).
7. From the menu above the task list, click Variables. Create two new variables as shown. (Make sure to
use your correct website name).
8. From the menu above the variables, click Pipeline to return to editing the pipeline. Hover over the
Ring 0 (Canary) stage and click the Clone icon when it appears. Select the new stage and rename it
to Ring 1 (Early Adopters).
9. From the Tasks drop down list, select the Ring 1 (Early Adopters) stage, and select the Azure CLI
task. Modify the script by changing the value of 10 to 30 to cause 30% of the traffic to go to the new
feature site.
This allows us to move the new feature into wider distribution if it was working ok in the smaller set of
users.
10. From the menu above the tasks, click Pipeline to return to editing the release pipeline. Hover over the
Ring 1 (Early Adopters) stage and when the Clone icon appears, click it. Click to select the new stage
and rename it to Public. Click Save and OK.
11. Click the Pre-deployment conditions icon for the Ring 1 (Early Adopters) stage and add yourself as
a pre-deployment approver. Do the same for the Public stage. Click Save and OK.
The first step in letting the new code be released to the public, is to swap the new feature site (i.e., the
staging site) with the production, so that production is now running the new code.
12. From the Tasks drop down list, select the Public stage. Select the Azure CLI task, change the Display
name to Swap sites and change the Inline Script to the following:
az webapp deployment slot swap -g $(ResourceGroupName) -n $(WebsiteName) --slot staging --target-slot production
At this point, 10% of the traffic will be going to the new feature site.
16. Click Approve on the Ring 1 (Early Adopters) stage, and then Approve.
When this stage completes, 30% of the traffic will now be going to the early adopters in ring 1.
Lab
Lab 12: Feature flag management with Launch-
Darkly and Azure DevOps
Lab overview
LaunchDarkly9 is a continuous delivery platform that provides feature flags as a service. LaunchDarkly
gives you the power to separate feature rollout from code deployment and manage feature flags at scale.
Integration of LaunchDarkly with Azure DevOps minimizes potential risks associated with frequent
releases. To further integrate releases with your development process, you can link feature flag roll-outs
to Azure DevOps work items.
In this lab, you will learn how to optimize management of feature flags in Azure DevOps by leveraging
LaunchDarkly.
Objectives
After you complete this lab, you will be able to:
●● Create feature flags in LaunchDarkly
●● Integrate LaunchDarkly with Web applications
●● Automatically roll-out LaunchDarkly feature flags as part of Azure DevOps release pipelines
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions10
9 https://launchdarkly.com/
10 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
What Azure-based tool can you use to divert a percentage of your web traffic to a newer version of an
Azure website?
Review Question 3
What characteristics make users suitable for working with canary deployments?
Review Question 4
What is a potential disadvantage of using canary deployments?
Review Question 5
Apart from the traffic routing method, what else does Azure Traffic Manager consider when making routing
decisions?
Answers
What is the easiest way to create a staging environment for an Azure webapp?
What Azure-based tool can you use to divert a percentage of your web traffic to a newer version of an
Azure website?
What characteristics make users suitable for working with canary deployments?
Needing to look after multiple versions of code at the same time. Or, the users might not be the right ones
to test changes in the particular deployment.
Apart from the traffic routing method, what else does Azure Traffic Manager consider when making
routing decisions?
Health of the end point. (It includes built-in endpoint monitoring and automatic endpoint failover)
Module 13 Managing Infrastructure and Configuration using Azure Tools
Module overview
Module overview
“Infrastructure as code” (IaC) doesn't quite trip off the tongue, and its meaning isn't always clear. But IaC
has been with us since the beginning of DevOps—and some experts say DevOps wouldn't be possible
without it.
As the name suggests, infrastructure as code is the concept of managing your operations environment in
the same way you do applications or other code for general release. Rather than manually making config-
uration changes or using one-off scripts to make infrastructure adjustments, the operations infrastructure
is managed instead using the same rules and strictures that govern code development—particularly
when new server instances are spun up.
That means that the core best practices of DevOps—like version control, virtualized tests, and continuous
monitoring—are applied to the underlying code that governs the creation and management of your
infrastructure. In other words, your infrastructure is treated the same way that any other code would be.
The elasticity of the cloud paradigm and disposability of cloud machines can only truly be leveraged by
applying the principles of Infrastructure as Code to all your infrastructure.
Learning Objectives
After completing this module, students will be able to:
●● Apply infrastructure and configuration as code principles
●● Deploy and manage infrastructure using Microsoft automation technologies such as ARM templates,
PowerShell, and Azure CLI
Environment configuration
Configuration management refers to the automated management of configuration, typically in the form of version-controlled scripts, for an application and all the environments needed to support it. Configuration management means lighter-weight, executable configurations that allow us to treat configuration and environments as code.
For example, adding a new port to a firewall could be done by editing a text file and running the release pipeline, not by remoting into the environment and manually adding the port.
Note: The term configuration as code can also be used to mean configuration management; however, it is not used as widely. In some cases, infrastructure as code is used to describe both provisioning and configuring machines. The term infrastructure as code is also sometimes used to include configuration as code, but not vice versa.
●● Imperative (procedural). In the imperative approach, the script states the how for the final state of the machine by executing the steps needed to get to the finished state. It defines what the final state needs to be, but also includes how to achieve that final state. It can also include coding concepts such as for and if-then constructs, loops, and matrices.
Best practices
The declarative approach abstracts away the methodology of how a state is achieved. As such, it can be
easier to read and understand what is being done. It also makes it easier to write and define. Declarative
approaches also separate out the final desired state, and the coding required to achieve that state. Thus,
it does not force you to use a particular approach, which allows for optimization where possible.
A declarative approach would generally be the preferred option where ease of use is the main goal.
Azure Resource Manager template files are an example of a declarative automation approach.
An imperative approach may have some advantages where there are complex scenarios where changes
in the environment take place relatively frequently, which need to be accounted for in your code.
There is no absolute on which is the best approach to take, and individual tools may be able to be used
in either declarative or imperative forms. The best approach for you to take will depend on your needs.
Idempotent configuration
Idempotence is a mathematical term that can be used in the context of Infrastructure as Code and
Configuration as Code. It is the ability to apply one or more operations against a resource, resulting in
the same outcome.
For example, if you run a script on a system it should have the same outcome regardless of the number
of times you execute the script. It should not error out or perform duplicate actions regardless of the
environment’s starting state.
In essence, if you apply a deployment to a set of resources 1,000 times, you should end up with the same
result after each application of the script or template.
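A quick way to see this property in practice (the resource group name and region below are placeholders) is that running the same declarative Azure CLI command twice leaves the subscription in the same state:
# The first call creates the resource group; the second simply returns it unchanged.
az group create --name <resource-group> --location westeurope
az group create --name <resource-group> --location westeurope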
1 https://www.wintellect.com/idempotency-for-windows-azure-message-queues/
Template components
Azure Resource Manager templates are written in JSON, which allows you to express data stored as an object (such as a virtual machine) in text. A JSON document is essentially a collection of key-value pairs. Each key is a string whose value can be:
●● A string
●● A number
●● A Boolean expression
●● A list of values
●● An object (which is a collection of other key-value pairs)
A Resource Manager template can contain sections that are expressed using JSON notation, but are not
related to the JSON language itself:
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/
deploymentTemplate.json#",
"contentVersion": "",
"parameters": { },
"variables": { },
"functions": [ ],
"resources": [ ],
"outputs": { }
}
Parameters
This section is where you specify which values are configurable when the template runs. For example, you
might allow template users to specify a username, password, or domain name.
Here's an example that illustrates two parameters: one for a virtual machine's (VM's) username, and one
for its password:
"parameters": {
"adminUsername": {
"type": "string",
"metadata": {
"description": "Username for the Virtual Machine."
}
},
"adminPassword": {
"type": "securestring",
"metadata": {
"description": "Password for the Virtual Machine."
}
}
}
Variables
This section is where you define values that are used throughout the template. Variables can help make
your templates easier to maintain. For example, you might define a storage account name one time as a
variable, and then use that variable throughout the template. If the storage account name changes, you
need only update the variable once.
Here's an example that illustrates a few variables that describe networking features for a VM:
"variables": {
"nicName": "myVMNic",
"addressPrefix": "10.0.0.0/16",
"subnetName": "Subnet",
"subnetPrefix": "10.0.0.0/24",
"publicIPAddressName": "myPublicIP",
"virtualNetworkName": "MyVNET"
}
Functions
This section is where you define procedures that you don't want to repeat throughout the template. Like
variables, functions can help make your templates easier to maintain.
Here's an example that creates a function for creating a unique name to use when creating resources that
have globally unique naming requirements:
"functions": [
{
"namespace": "contoso",
"members": {
"uniqueName": {
"parameters": [
{
"name": "namePrefix",
"type": "string"
}
],
"output": {
"type": "string",
"value": "[concat(toLower(parameters('namePrefix')), uniqueS-
tring(resourceGroup().id))]"
}
}
}
}
],
Resources
This section is where you define the Azure resources that make up your deployment.
Here's an example that creates a public IP address resource:
{
"type": "Microsoft.Network/publicIPAddresses",
"name": "[variables('publicIPAddressName')]",
"location": "[parameters('location')]",
"apiVersion": "2018-08-01",
"properties": {
"publicIPAllocationMethod": "Dynamic",
"dnsSettings": {
"domainNameLabel": "[parameters('dnsLabelPrefix')]"
}
}
}
Here, the type of resource is Microsoft.Network/publicIPAddresses. The name is read from the
variables section, and the location, or Azure region, is read from the parameters section.
Because resource types can change over time, apiVersion refers to the version of the resource type
you want to use. As resource types evolve, you can modify your templates to work with the latest fea-
tures.
Outputs
This section is where you define any information you'd like to receive when the template runs. For
example, you might want to receive your VM's IP address or fully qualified domain name (FQDN),
information you will not know until the deployment runs.
Here's an example that illustrates an output named hostname. The FQDN value is read from the VM's
public IP address settings:
"outputs": {
"hostname": {
"type": "string",
"value": "[reference(variables('publicIPAddressName')).dnsSettings.
fqdn]"
}
}
Manage dependencies
For any given resource, other resources might need to exist before you can deploy the resource. For
example, a Microsoft SQL Server must exist before attempting to deploy a SQL Database. You can define
this relationship by marking one resource as dependent on the other. You define a dependency with the
dependsOn element, or by using the reference function.
Resource Manager evaluates the dependencies between resources and deploys them in their dependent
order. When resources aren't dependent on each other, Resource Manager deploys them in parallel. You
only need to define dependencies for resources that are deployed in the same template.
Circular dependencies
A circular dependency is when there is a problem with dependency sequencing, resulting in the deploy-
ment going around in a loop and unable to proceed. As a result, Resource Manager cannot deploy the
resources. Resource Manager identifies circular dependencies during template validation. If you receive
an error stating that a circular dependency exists, evaluate your template to find whether any dependen-
cies are not needed and can be removed.
If removing dependencies doesn't resolve the issue, you can move some deployment operations into
child resources that are deployed after the resources with the circular dependency.
Modularize templates
When using Azure Resource Manager templates, a best practice is to modularize them by breaking them
out into the individual components. The primary methodology to use to do this is by using linked
templates. These allow you to break out the solution into targeted components, and then reuse those
various elements across different deployments.
Linked template
To link one template to another, add a Microsoft.Resources/deployments resource to your main template.
"resources": [
{
"apiVersion": "2017-05-10",
"name": "linkedTemplate",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
<link-to-external-template>
}
}
]
Nested template
You can also nest a template within the main template: use the template property and specify the template syntax inline. Nesting aids modularization somewhat, but dividing up the various components this way can result in a large main file, because all the elements end up within that single file.
"resources": [
{
"apiVersion": "2017-05-10",
"name": "nestedTemplate",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/
deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageName')]",
"apiVersion": "2015-06-15",
"location": "West US",
"properties": {
"accountType": "Standard_LRS"
}
}
]
}
}
}
]
Note: For nested templates, you cannot use parameters or variables that are defined within the nested
template itself. You can only use parameters and variables from the main template.
The properties you provide for the deployment resource will vary based on whether you're linking to an
external template or nesting an inline template within the main template.
Deployments modes
When deploying your resources using templates, you have three options:
●● validate. This option compiles the template, validates the deployment, and ensures the template is functional (for example, it contains no circular dependencies) and syntactically correct.
●● incremental mode (default). This option only deploys whatever is defined in the template. It does
not remove or modify any resources that are not defined in the template. For example, if you have
deployed a VM via template and then renamed the VM in the template, the first VM deployed will
remain after the template is run again. This is the default mode.
●● complete mode: Resource Manager deletes resources that exist in the resource group but aren't specified in the template. Only resources defined in the template will be present in the resource group after the template deploys. As a best practice, use this mode for production environments where possible, to try to achieve idempotency in your deployment templates.
When deploying with PowerShell, set the deployment mode by using the Mode parameter; within a template, the equivalent is the mode property shown in the nested template example earlier in this topic.
Note: As a best practice, use one resource group per deployment.
Note: For both linked and nested templates, you can only use incremental deployment mode.
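For reference, here is a hedged sketch of selecting the deployment mode from the Azure CLI; the resource group and template file names are placeholders:
# Deploy in complete mode: resources in the group that are not in the template are removed.
az deployment group create \
  --resource-group <resource-group> \
  --template-file azuredeploy.json \
  --mode Complete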
You can also provide the parameters inline. However, you can't use both inline parameters and a link to a parameter file. The following example uses the templateLink property:
"resources": [
{
"name": "linkedTemplate",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2018-05-01",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri":"https://linkedtemplateek1store.blob.core.windows.net/
linkedtemplates/linkedStorageAccount.json?sv=2018-03-28&sr=b&sig=dO9p7Xnbh-
Gq56BO%2BSW3o9tX7E2WUdIk%2BpF1MTK2eFfs%3D&se=2018-12-31T14%3A32%3A29Z&sp=r"
},
"parameters": {
"storageAccountName":{"value": "[variables('storageAccount-
Name')]"},
"location":{"value": "[parameters('location')]"}
}
}
},
keyVaultName='{your-unique-vault-name}'
resourceGroupName='{your-resource-group-name}'
location='centralus'
userPrincipalName='{your-email-address-associated-with-your-subscription}'
The following template (also available at GitHub - sqlserver.json2) deploys a SQL database that includes
an administrator password. The password parameter is set to a secure string. However, the template does
not specify where that value comes from:
2 https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/keyvaultparameter/sqlserver.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/
deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminLogin": {
"type": "string"
},
"adminPassword": {
"type": "securestring"
},
"sqlServerName": {
"type": "string"
}
},
"resources": [
{
"name": "[parameters('sqlServerName')]",
"type": "Microsoft.Sql/servers",
"apiVersion": "2015-05-01-preview",
"location": "[resourceGroup().location]",
"tags": {},
"properties": {
"administratorLogin": "[parameters('adminLogin')]",
"administratorLoginPassword": "[parameters('adminPassword')]",
"version": "12.0"
}
}
],
"outputs": {
}
}
Now you can create a parameter file for the preceding template. In the parameter file, specify a parame-
ter that matches the name of the parameter in the template. For the parameter value, reference the secret
from the Key Vault. You reference the secret by passing the resource identifier of the Key Vault and the
name of the secret. In the following parameter file (also available at GitHub - keyvaultparameter3), the
Key Vault secret must already exist, and you provide a static value for its resource ID.
Copy this file locally, and set the subscription ID, vault name, and SQL server name:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/
deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminLogin": {
"value": "exampleadmin"
},
"adminPassword": {
3 https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/keyvaultparameter/sqlserver.parameters.json
"reference": {
"keyVault": {
"id": "/subscriptions/<subscription-id>/resourceGroups/
examplegroup/providers/Microsoft.KeyVault/vaults/<vault-name>"
},
"secretName": "examplesecret"
}
},
"sqlServerName": {
"value": "<your-server-name>"
}
}
}
All you need to do now is deploy the template and pass in the parameter file, as in the sketch below.
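For example, assuming you saved the template and parameter file locally as sqlserver.json and sqlserver.parameters.json, the deployment could look like this:
az deployment group create \
  --resource-group examplegroup \
  --template-file sqlserver.json \
  --parameters @sqlserver.parameters.json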
For more information, go to Use Azure Key Vault to pass secure parameter value during deployment4. That page also explains how to reference a secret with a dynamic ID.
4 https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-keyvault-parameter
Azure CLI provides cross-platform command-line tools for managing Azure resources. You can install this
locally on computers running the Linux, macOS, or Windows operating systems. You can also use Azure
CLI from a browser through Azure Cloud Shell.
In both cases, you can use Azure CLI interactively or through scripts:
●● Interactive. For Windows operating systems, launch a shell such as cmd.exe, or for Linux or macOS,
use Bash. Then issue the command at the shell prompt.
●● Scripted. Assemble the Azure CLI commands into a shell script using the script syntax of your chosen
shell, and then execute the script.
If you already know the name of the command you want, the help argument for that command will get
you more detailed information on the command, and for a command group, a list of the available
subcommands. For example, here's how you would get a list of the subgroups and commands for
managing blob storage:
az storage blob --help
Creating resources
When creating a new Azure resource, typically there are three high-level steps:
1. Connect to your Azure subscription.
2. Create the resource.
3. Verify that creation was successful.
1. Connect
Because you're working with a local Azure CLI installation, you'll need to authenticate before you can
execute Azure commands. You do this by using the Azure CLI login command:
az login
Azure CLI will typically launch your default browser to open the Azure sign-in page. If this doesn't work, follow the command-line instructions and enter the authorization code on the Enter Code5 page.
After a successful sign in, you'll be connected to your Azure subscription.
2. Create
You'll often need to create a new resource group before you create a new Azure service, so we'll use
resource groups as an example to show how to create Azure resources from the Azure CLI.
The Azure CLI group create command creates a resource group. You must specify a name and location.
The name parameter must be unique within your subscription. The location parameter determines where
the metadata for your resource group will be stored. You use strings like “West US”, "North Europe", or
“West India” to specify the location. Alternatively, you can use single word equivalents, such as "westus",
“northeurope”, or "westindia".
The core syntax to create a resource group is:
az group create --name <name> --location <location>
3. Verify
For many Azure resources, Azure CLI provides a list subcommand to get resource details. For example,
the Azure CLI group list command lists your Azure resource groups. This is useful to verify whether
resource group creation was successful:
az group list
To get more concise information, you can format the output as a simple table:
az group list --output table
If you have several items in the group list, you can filter the returned values by adding a --query option. For example:
az group list --query "[?name == '<rg name>']"
5 https://aka.ms/devicelogin
Note: You format the query using JMESPath, which is a standard query language for JSON requests. You
can learn more about this filter language at http://jmespath.org/6
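As a further sketch, you can combine a JMESPath filter with a field projection and table output (the location value here is illustrative):
az group list --query "[?location=='westus2'].{Name:name, Location:location}" --output table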
If you use a PowerShell environment for running Azure CLI scripts, you'll need to use the following syntax
for variables:
$variable="value"
$variable=integer
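The lab commands that follow reference $USERNAME, $PASSWORD, and $DNS_LABEL_PREFIX. Their values are not defined in this extract, so the following Bash sketch of setting them is an assumption (choose your own values; the DNS label must be globally unique):
USERNAME=azureuser
PASSWORD='<a strong password>'
DNS_LABEL_PREFIX=partsunlimited$RANDOM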
Steps
In the following steps we will deploy the template and verify the result using Azure CLI:
1. Create a resource group to deploy your resources to, by running the following command:
az group create --name <resource group name> --location <your nearest datacenter>
2. From Cloud Shell, run the curl command to download the template you used previously from GitHub:
curl https://raw.githubusercontent.com/Microsoft/PartsUnlimited/master/Labfiles/AZ-400T05_Implementing_Application_Infrastructure/M01/azuredeploy.json > azuredeploy.json
3. Validate the template by running the following command, substituting the values with your own:
az deployment group validate \
--resource-group <resource group name> \
--template-file azuredeploy.json \
--parameters adminUsername=$USERNAME \
--parameters adminPassword=$PASSWORD \
--parameters dnsLabelPrefix=$DNS_LABEL_PREFIX
6 http://jmespath.org/
7 https://azure.microsoft.com/en-us/free/
4. Deploy the resource by running the following command, substituting the same values as earlier:
az deployment group create \
--name MyDeployment \
--resource-group <resource group name> \
--template-file azuredeploy.json \
--parameters adminUsername=$USERNAME \
--parameters adminPassword=$PASSWORD \
--parameters dnsLabelPrefix=$DNS_LABEL_PREFIX
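Step 5 is not shown in this extract; it presumably captures the public IP address of the deployed VM into $IPADDRESS. A minimal sketch of doing so with the Azure CLI (the VM name depends on the template, so it is a placeholder here) would be:
IPADDRESS=$(az vm show \
  --resource-group <resource group name> \
  --name <vm name> \
  --show-details \
  --query publicIps \
  --output tsv)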
6. Run curl to access your web server and verify that the custom script extension was deployed and ran successfully:
curl $IPADDRESS
✔️ Note: Don't forget to delete any resources you deployed to avoid incurring additional costs from
them.
Azure Automation is not the only way to automate within Azure. You can also use open-source tools to
perform some of these operations. However, the integration hooks available to Azure Automation
remove much of the integration complexity that you would have to manage if you performed these
operations manually.
Some Azure Automation capabilities are:
●● Process automation. Azure Automation provides you with the ability to automate frequent, time-con-
suming, and error-prone cloud management tasks.
●● Azure Automation State Configuration. This is an Azure service that allows you to write, manage, and
compile Windows PowerShell DSC configurations, import DSC Resources, and assign configurations to
target nodes, all in the cloud. For more information, visit Azure Automation State Configuration
Overview9.
●● Update management. Manage operating system updates for Windows and Linux computers in Azure,
in on-premises environments, or in other cloud providers. Get update compliance visibility across
Azure, on-premises, and for other cloud services. You can create scheduled deployments to orches-
trate update installations within a defined maintenance window. For more information, visit Update
Management solution in Azure10.
●● Start and stop virtual machines (VMs). Azure Automation provides an integrated Start/Stop VM–relat-
ed resource that enables you to start and stop VMs on user-defined schedules. It also provides
insights through Azure Log Analytics and can send emails by using action groups. For more informa-
tion, go to Start/Stop VMs during off-hours solution in Azure Automation11.
8 https://azure.microsoft.com/en-us/documentation/articles/automation-intro/
9 https://docs.microsoft.com/en-us/azure/automation/automation-dsc-overview
10 https://docs.microsoft.com/en-us/azure/automation/automation-update-management
11 https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management
●● Integration with GitHub, Azure DevOps, Git, or Team Foundation Version Control (TFVC) repositories.
For more information, go to Source control integration in Azure Automation12
●● Automate Amazon Web Services (AWS) Resources. Automate common tasks with resources in AWS
using Automation runbooks in Azure. For more information, go to Authenticate Runbooks with
Amazon Web Services13.
●● Manage Shared resources. Azure Automation consists of a set of shared resources (such as connec-
tions, credentials, modules, schedules, and variables) that make it easier to automate and configure
your environments at scale.
●● Run backups. Azure Automation allows you to run regular backups of non-database systems, such as
backing up Azure Blob Storage at certain intervals.
Azure Automation works across hybrid cloud environments in addition to Windows and Linux operating
systems.
Automation accounts
To start using the Microsoft Azure Automation service, you must first create an Automation account14
from within the Azure portal. Steps to create an Azure Automation account are available on the Create
an Azure Automation account15 page.
Automation accounts are like Azure Storage accounts in that they serve as a container for your automation artifacts: your runbooks, runbook executions (jobs), and the assets on which your runbooks depend.
An Automation account gives you access to manage all Azure resources via an API. To safeguard this, creating an Automation account requires subscription-owner access.
12 https://docs.microsoft.com/en-us/azure/automation/source-control-integration
13 https://docs.microsoft.com/en-us/azure/automation/automation-config-aws-account
14 https://azure.microsoft.com/en-us/documentation/articles/automation-security-overview/
15 https://docs.microsoft.com/en-us/azure/automation/automation-quickstart-create-account
You must be a subscription owner to create the Run As accounts that the service uses.
If you do not have the proper subscription privileges, you will see a warning in the portal.
To use Azure Automation, you will need at least one Azure Automation account. However, as a best
practice you should create multiple automation accounts to segregate and limit the scope of access and
minimize any risk to your organization. For example, you might use one account for development,
another for production, and another for your on-premises environment. You can have up to 30 Automa-
tion accounts.
What is a runbook?
Runbooks serve as repositories for your custom scripts and workflows. They also typically reference
Automation shared resources such as credentials, variables, connections, and certificates. Runbooks can
also contain other runbooks, thereby allowing you to build more complex workflows. You can invoke and
run runbooks either on demand, or according to a schedule by leveraging Automation Schedule assets.
Creating runbooks
When creating runbooks, you have two options. You can either:
●● Create your own runbook and import it. For more information about creating or importing a runbook
in Azure Automation, go to Manage runbooks in Azure Automation16.
●● Modify runbooks from the runbook gallery. This provides a rich ecosystem of runbooks that are
available for your requirements. Visit Runbook and module galleries for Azure Automation17 for
more information.
There is also a vibrant open-source community that creates runbooks you can apply directly to your use
cases.
You can choose from different runbook types based on your requirements and Windows PowerShell experience. If you prefer to work directly with Windows PowerShell code, you can use either a PowerShell runbook or a PowerShell Workflow runbook, and edit it offline or with the text editor in the Azure portal. If you prefer to edit a runbook without being exposed to the underlying code, you can create a graphical runbook by using the graphical editor in the Azure portal.
Graphical runbooks
Graphical runbooks and Graphical PowerShell Workflow runbooks are created and edited with the graphi-
cal editor in the Azure portal. You can export them to a file and then import them into another automa-
tion account, but you cannot create or edit them with another tool.
16 https://docs.microsoft.com/en-us/azure/automation/automation-creating-importing-runbook
17 https://docs.microsoft.com/en-us/azure/automation/automation-runbook-gallery
PowerShell runbooks
PowerShell runbooks are based on Windows PowerShell. You edit the runbook code directly, using the
text editor in the Azure portal. You can also use any offline text editor and then import the runbook into
Azure Automation. PowerShell runbooks do not use parallel processing.
Python runbooks
Python runbooks compile under Python 2, which is the only version supported at this time. You can edit the runbook code directly using the text editor in the Azure portal, or you can use any offline text editor and import the runbook into Azure Automation. You can also use Python libraries; to use third-party libraries, you must first import the package into the Automation account.
✔️ Note: You can't convert runbooks from graphical to textual type, or vice versa.
For more information on the different types of runbooks, visit Azure Automation runbook types18.
18 https://azure.microsoft.com/en-us/documentation/articles/automation-runbook-types
As a best practice, always try to create global assets so they can be used across your runbooks. This will
save time and reduce the number of manual edits within individual runbooks.
Runbook Gallery
Gallery runbooks are provided to help reduce the time it takes to build custom solutions. These runbooks have already been built by Microsoft and the Microsoft community, and you can use them with or without modification. You can import runbooks from the runbook gallery at the Microsoft Script Center, on the Script resources for IT professionals19 webpage.
✔️ Note: A new Azure PowerShell module was released in December 2018, called the Az PowerShell
module. This replaces the existing AzureRM PowerShell module, and is now the intended PowerShell
module for interacting with Azure. This new Az module is now supported in Azure Automation. For more
general details on the new Az PowerShell module, go to Introducing the new Azure PowerShell Az
module20.
19 https://gallery.technet.microsoft.com/scriptcenter/site/search?f[0].Type=RootCategory&f[0].Value=WindowsAzure&f[1].
Type=SubCategory&f[1].Value=WindowsAzure_automation&f[1].Text=Automation
20 https://docs.microsoft.com/en-us/powershell/azure/new-azureps-module-az?view=azps-2.7.0
For each runbook in the gallery, you can view details such as its description, ratings, and questions and answers. For more information, refer to Script resources for IT professionals21.
✔️ Note: Python runbooks are also available from the script center gallery. To find them, filter by lan-
guage and select Python.
✔️ Note: You cannot use PowerShell to import directly from the runbook gallery.
Webhooks
You can automate the process of starting a runbook either by scheduling it, or by using a webhook.
A webhook allows you to start a particular runbook in Azure Automation through a single HTTPS
request. This allows external services such as Azure DevOps, GitHub, or custom applications to start
runbooks without implementing more complex solutions using the Azure Automation API
(More information about webhooks is available at Starting an Azure Automation runbook with a
webhook22.)
21 https://gallery.technet.microsoft.com/scriptcenter
22 https://docs.microsoft.com/en-us/azure/automation/automation-webhooks
Create a webhook
You create a webhook linked to a runbook using the following steps:
1. In the Azure portal, open the runbook that you want to create the webhook for.
2. In the runbook pane, under Resources, select Webhooks, and then select + Add webhook.
3. Select Create new webhook.
4. In the Create new webhook dialog, there are several values you need to configure. After you config-
ure them, select Create:
●● Name. Specify any name you want for a webhook, because the name is not exposed to the client;
it's only used for you to identify the runbook in Azure Automation.
●● Enabled. A webhook is enabled by default when it is created. If you set it to Disabled, then no
client can use it.
●● Expires. Each webhook has an expiration date, after which it can no longer be used. You can continue to modify the date after creating the webhook, provided the webhook has not expired.
●● URL. The URL of the webhook is the unique address that a client calls with an HTTP POST, to start
the runbook linked to the webhook. It is automatically generated when you create the webhook,
and you cannot specify a custom URL. The URL contains a security token that allows the runbook
to be invoked by a third-party system with no further authentication. For this reason, treat it like a
password. For security reasons, you can only view the URL in the Azure portal at the time the
webhook is created. Make note of the URL in a secure location for future use.
✔️ Note: Make sure you make a copy of the webhook URL when creating it, and then store it in a safe
place. After you create the webhook, you cannot retrieve the URL again.
5. Select the Parameters run settings (Default: Azure) option. This option has the following characteristics:
●● If the runbook has mandatory parameters, you will need to provide these mandatory parameters
during creation. You are not able to create the webhook unless values are provided.
●● If there are no mandatory parameters in the runbook, there is no configuration required here.
●● The webhook must include values for any mandatory parameters of the runbook but could also
include values for optional parameters.
●● When a client starts a runbook using a webhook, it cannot override the parameter values defined
in the webhook.
●● To receive data from the client, the runbook can accept a single parameter called $WebhookData of type [object] that contains data that the client includes in the POST request (see the sketch after this list).
●● There is no required webhook configuration to support the $WebhookData parameter.
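A minimal sketch of a runbook that reads $WebhookData follows. The WebhookName and RequestBody properties are populated by Azure Automation for webhook-triggered jobs; the VMName field in the payload is a hypothetical example of client-supplied data:
param
(
    [object]$WebhookData
)

if ($WebhookData)
{
    # Properties populated by Azure Automation when the runbook is started by a webhook
    $webhookName = $WebhookData.WebhookName
    $requestBody = $WebhookData.RequestBody

    # Assumes the client posted a JSON body
    $payload = ConvertFrom-Json -InputObject $requestBody
    Write-Output "Webhook '$webhookName' received a request for VM '$($payload.VMName)'."
}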
Using a webhook
To use a webhook after it has been created, your client application must issue an HTTP POST with the
URL for the webhook.
●● The syntax of the webhook is in the following format:
https://<Webhook Server>/token?=<Token Value>
●● The client receives a return code from the POST request indicating whether the request was accepted. The response contains a single job ID, but the JSON format allows for potential future enhancements.
●● You cannot determine when the runbook job completes or determine its completion status from the
webhook. You can only determine this information using the job ID with another method such as
PowerShell or the Azure Automation API.
More details are available on the Starting an Azure Automation runbook with a webhook23 page.
23 https://docs.microsoft.com/en-us/azure/automation/automation-webhooks#details-of-a-webhook
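As a hedged sketch (the URI and the body fields are placeholders), a client could start the runbook from PowerShell like this; the JSON response includes the ID of the runbook job that was queued:
$uri = '<webhook URL that you copied when you created the webhook>'
$body = @{ VMName = 'vm0'; ResourceGroupName = 'rg-az400-demo' } | ConvertTo-Json
$response = Invoke-RestMethod -Method Post -Uri $uri -Body $body

# The response body contains the ID(s) of the runbook job that was started
$response.JobIds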
If successful, you should receive an email notification from GitHub stating that A third-party OAuth Application (Automation Source Control) with repo scope was recently authorized to access your account.
5. After authentication completes, fill in the details based on the following list, and then select Save.
●● Name. A friendly name.
●● Source control type. GitHub, Azure DevOps Git, or Azure DevOps TFVC.
●● Repository. The name of the repository or project.
●● Branch. The branch from which to pull the source files. Branch targeting is not available for the TFVC source control type.
●● Folder Path. The folder that contains the runbooks to sync.
●● Auto sync. Turns automatic sync on or off when a commit is made in the source control repository.
●● Publish Runbook. If set to On, runbooks are automatically published after they are synced from source control.
●● Description. A text field to provide additional details.
6. If you set Auto sync to Yes, a full sync will start. If you set Auto sync to No, open the Source Control Summary blade again by selecting your repository in Azure Automation, and then selecting Start Sync.
7. Verify that your source control is listed in the Azure Automation Source control page for you to use.
PowerShell workflows
IT pros often automate management tasks for their multi-device environments by running sequences of
long-running tasks or workflows. These tasks can affect multiple managed computers or devices at the
same time. PowerShell Workflow lets IT pros and developers leverage the benefits of Windows Workflow
Foundation with the automation capabilities and ease of using Windows PowerShell. Refer to A Develop-
er's Introduction to Windows Workflow Foundation (WF) in .NET 424 for more information.
Windows PowerShell Workflow functionality was introduced in Windows Server 2012 and Windows 8 and
is part of Windows PowerShell 3.0 and later. Windows PowerShell Workflow helps automate distribution,
orchestration, and completion of multi-device tasks, freeing users and administrators to focus on high-
er-level tasks.
Activities
An activity is a specific task that you want a workflow to perform. Just as a script is composed of one or
more commands, a workflow is composed of one or more activities that are carried out in sequence. You
can also use a script as a single command in another script and use a workflow as an activity within
another workflow.
Workflow characteristics
A workflow can:
●● Be long-running.
●● Be repeated over and over.
●● Run tasks in parallel.
●● Be interrupted—can be stopped and restarted, suspended, and resumed.
●● Continue after an unexpected interruption, such as a network outage or computer/server restart.
Workflow benefits
A workflow offers many benefits, including:
●● Windows PowerShell scripting syntax. Workflows are built on PowerShell scripting syntax.
●● Multidevice management. Simultaneously apply workflow tasks to hundreds of managed nodes.
24 https://docs.microsoft.com/en-us/previous-versions/dotnet/articles/ee342461(v=msdn.10)
●● Single task runs multiple scripts and commands. Combine related scripts and commands into a single task, then run the single task on multiple computers. The activity status and progress within the workflow are visible at any time.
●● Automated failure recovery.
●● Workflows survive both planned and unplanned interruptions, such as computer restarts.
●● You can suspend a workflow operation, then restart or resume the workflow from the point at
which it was suspended.
●● You can author checkpoints as part of your workflow, so that you can resume the workflow from
the last persisted task (or checkpoint) instead of restarting the workflow from the beginning.
●● Connection and activity retries. You can retry connections to managed nodes if network-connection
failures occur. Workflow authors can also specify activities that must run again if the activity cannot be
completed on one or more managed nodes (for example, if a target computer was offline while the
activity was running).
●● Connect and disconnect from workflows. Users can connect and disconnect from the computer that is
running the workflow, but the workflow will remain running. For example, if you are running the
workflow and managing the workflow on two different computers, you can sign out of or restart the
computer from which you are managing the workflow and continue to monitor workflow operations
from another computer without interrupting the workflow.
●● Task scheduling. You can schedule a task to start when specific conditions are met, as with any other
Windows PowerShell cmdlet or script.
Creating a workflow
To write the workflow, use a script editor such as the Windows PowerShell Integrated Scripting Environ-
ment (ISE). This enforces workflow syntax and highlights syntax errors. For more information, review the
tutorial My first PowerShell Workflow runbook25.
A benefit of using PowerShell ISE is that it automatically compiles your code and allows you to save the
artifact. Because the syntactic differences between scripts and workflows are significant, a tool that knows
both workflows and scripts will save you significant coding and testing time.
Syntax
When you create your workflow, begin with the workflow keyword, which identifies a workflow command to PowerShell; a script workflow requires it. Name the workflow immediately after the workflow keyword, and enclose the body of the workflow in braces.
A workflow is a Windows command type, so select a name with a verb-noun format:
workflow Test-Workflow
{
...
}
25 https://azure.microsoft.com/en-us/documentation/articles/automation-first-runbook-textual/
To add parameters to a workflow, use the Param keyword. These are the same techniques that you use to
add parameters to a function.
Finally, add your standard PowerShell commands.
workflow MyFirstRunbook-Workflow
{
Param(
[string]$VMName,
[string]$ResourceGroupName
)
....
Start-AzureRmVM -Name $VMName -ResourceGroupName $ResourceGroupName
}
Prerequisites
●● Note: You require an Azure subscription to perform the following steps. If you don't have one you can
create one by following the steps outlined on the Create your Azure free account today26 webpage.
Steps
26 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_
campaign=visualstudio
For this walkthrough, you'll type directly into the runbook, as detailed in the following steps:
1. Type Write-Output "Hello World" between the braces, as shown below:
Workflow MyFirstRunbook-Workflow
{
Write-Output "Hello World"
}
2. Select Start to start the test. This should be the only enabled option.
A runbook job is created, and its status is displayed. The job status starts as Queued, indicating that it is waiting for a runbook worker in the cloud to become available. It moves to Starting when a worker claims the job, and then to Running when the runbook starts running. When the runbook job completes, its output is displayed. In this case, you should see Hello World.
3. When the runbook job finishes, close the Test pane.
5. You just want to start the runbook, so select Start, and then when prompted, select Yes.
6. When the job pane opens for the runbook job that you created, leave it open so you can watch the
job's progress.
7. Verify that when the job completes, the job statuses that display in Job Summary match the statuses that you saw when you tested the runbook.
Checkpoints
A checkpoint is a snapshot of the current state of the workflow. Checkpoints include the current value
for variables, and any output generated up to that point. (For more information on what a checkpoint is,
read the checkpoint27 webpage.)
If a workflow ends in an error or is suspended, the next time it runs it will start from its last checkpoint,
instead of at the beginning of the workflow. You can set a checkpoint in a workflow with the Check-
point-Workflow activity.
For example, in the following sample code, if an exception occurs after Activity2, the workflow will end. When the workflow is run again, it starts with Activity2, because that activity follows just after the last checkpoint that was set.
27 https://docs.microsoft.com/en-us/azure/automation/automation-powershell-workflow#checkpoints
<Activity1>
Checkpoint-Workflow
<Activity2>
<Exception>
<Activity3>
Parallel processing
In a Parallel script block, multiple commands run concurrently (in parallel) instead of sequentially, as they would in a typical script. This is referred to as parallel processing. (More information about parallel processing is available on the Parallel processing28 webpage.)
In the following example, the vm0 and vm1 VMs are started concurrently, and vm2 starts only after vm0 and vm1 have started.
Parallel
{
    Start-AzureRmVM -Name $vm0 -ResourceGroupName $rg
    Start-AzureRmVM -Name $vm1 -ResourceGroupName $rg
}
Start-AzureRmVM -Name $vm2 -ResourceGroupName $rg
The following constructs introduce some additional parallel processing options:
●● ForEach -Parallel. You can use the ForEach -Parallel construct to concurrently process commands for
each item in a collection. The items in the collection are processed in parallel while the commands in
the script block run sequentially.
In the following example, Activity1 starts at the same time for all items in the collection. For each item,
Activity2 starts after Activity1 completes. Activity3 starts only after both Activity1 and Activity2 have
completed for all items.
●● ThrottleLimit. Use the ThrottleLimit parameter to limit parallelism. Setting ThrottleLimit too high can cause problems. The ideal value for the ThrottleLimit parameter depends on several environmental factors. Start with a low ThrottleLimit value, and then increase the value until you find one that works for your specific circumstances:
ForEach -Parallel -ThrottleLimit 10 ($<item> in $<collection>)
{
<Activity1>
<Activity2>
}
<Activity3>
A real-world example of this could be similar to the following code, where a message displays for each
file after it is copied. Only after all files are completely copied does the final completion message display.
Workflow Copy-Files
{
    $files = @("C:\LocalPath\File1.txt","C:\LocalPath\File2.txt","C:\LocalPath\File3.txt")
    ForEach -Parallel ($file in $files)
    {
        # C:\RemotePath is an illustrative destination; a message displays for each file as it completes
        Copy-Item -Path $file -Destination "C:\RemotePath" -Force
        Write-Output "$file copied."
    }
    # Displays only after all parallel copies have finished
    Write-Output "All files copied."
}
28 https://docs.microsoft.com/en-us/azure/automation/automation-powershell-workflow#parallel-processing
Security considerations
Configuration drift can also introduce security vulnerabilities into your environment. For example:
●● Ports might be opened that should be kept closed.
●● Updates and security patches might not be applied across environments consistently.
●● Software might be installed that doesn't meet compliance requirements.
29 https://docs.microsoft.com/en-us/powershell/scripting/dsc/overview/overview?view=powershell-7
30 https://azure.microsoft.com/en-us/services/azure-policy/
Windows PowerShell Desired State Configuration (DSC) is a management platform in PowerShell. PowerShell DSC lets you manage, deploy, and enforce configurations for physical or virtual machines, including Windows and Linux machines.
For more information, visit Windows PowerShell Desired State Configuration Overview31.
DSC components
DSC consists of three primary components:
●● Configurations. These are declarative PowerShell scripts that define and configure instances of
resources. Upon running the configuration, DSC (and the resources being called by the configuration)
will simply apply the configuration, ensuring that the system exists in the state laid out by the config-
uration. DSC configurations are also idempotent: The Local Configuration Manager (LCM) will continue
to ensure that machines are configured in whatever state the configuration declares.
●● Resources. They contain the code that puts and keeps the target of a configuration in the specified state. Resources reside in PowerShell modules and can be written to model something as generic as a file or a Windows process, or as specific as a Microsoft Internet Information Services (IIS) server or a VM running in Azure.
●● Local Configuration Manager (LCM). The LCM runs on the nodes or machines you wish to configure.
This is the engine by which DSC facilitates the interaction between resources and configurations. The
LCM regularly polls the system using the control flow implemented by resources to ensure that the
state defined by a configuration is maintained. If the system is out of state, the LCM makes calls to the
code in resources to apply the configuration according to what has been defined.
There are two methods of implementing DSC:
1. Push mode, where a user actively pushes a configuration out to one or more target nodes and applies it.
2. Pull mode, where pull clients are configured to get their desired state configurations from a remote pull service automatically. This remote pull service is provided by a pull server, which acts as central control and management for the configurations, ensures that nodes conform to the desired state, and reports back on their compliance status. The pull server can be set up as an SMB-based pull server or an HTTPS-based server. HTTPS-based pull servers use the Open Data Protocol (OData) with the OData web service to communicate using REST APIs. This is the model we are most interested in, as it can be centrally managed and controlled. The diagram below provides an outline of the workflow of DSC pull mode.
31 https://docs.microsoft.com/en-us/powershell/scripting/dsc/overview/overview?view=powershell-6
If you are not familiar with DSC, take some time to view A Practical Overview of Desired State Config-
uration32. This is a great video from the TechEd 2014 event, and it covers the basics of DSC.
32 https://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/DCIM-B417#fbid=
33 https://msdn.microsoft.com/en-us/library/aa823192(v=vs.85).aspx
34 https://docs.microsoft.com/en-us/powershell/scripting/dsc/configurations/configurations?view=powershell-6#configuration-syntax
●● Configuration block. The Configuration block is the outermost script block. In this case, the name of
the configuration is LabConfig. Notice the curly brackets to define the block.
●● Node block. There can be one or more Node blocks. These define the nodes (computers and VMs)
that you are configuring. In this example, the node targets a computer called WebServer. You could
also call it localhost and use it locally on any server.
●● Resource blocks. There can be one or more resource blocks. This is where the configuration sets the
properties for the resources. In this case, there is one resource block called WindowsFeature. Notice
the parameters that are defined. (You can read more about resource blocks at DSC resources35.)
Here is another example:
Configuration MyDscConfiguration
{
    param
    (
        [string[]]$ComputerName='localhost'
    )

    Node $ComputerName
    {
        WindowsFeature MyFeatureInstance
        {
            Ensure = 'Present'
            Name = 'RSAT'
        }

        WindowsFeature My2ndFeatureInstance
        {
            Ensure = 'Present'
            Name = 'Bitlocker'
        }
    }
}
MyDscConfiguration
In this example, you specify the name of the node by passing it as the ComputerName parameter when
you compile the configuration. The name defaults to "localhost".
Within a Configuration block, you can do almost anything that you normally could in a PowerShell
function. You can also create the configuration in any editor, such as PowerShell ISE, and then save the
file as a PowerShell script with a .ps1 file type extension.
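As a minimal usage sketch (the node name and output path are assumptions, not part of the sample above), you can dot-source the script and call the configuration to compile a MOF file that can then be imported or applied:
# Load the configuration into the session, then compile a MOF for a specific node
. .\MyDscConfiguration.ps1
MyDscConfiguration -ComputerName 'WebServer01' -OutputPath 'C:\DscConfigs'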
35 https://docs.microsoft.com/en-us/powershell/scripting/dsc/resources/resources?view=powershell-7
36 https://azure.microsoft.com/en-us/documentation/articles/automation-dsc-compile/#compiling-a-dsc-configuration-with-the-azure-
portal
2. In your Azure Automation account, under Configuration Management > State configuration (DSC), select the Configurations tab, and then select + Add.
3. Point to the configuration file you want to import, and then select OK.
4. Once imported, double-click the file, select Compile, and then confirm by selecting Yes.
✔️ Note: If you prefer, you can also use the PowerShell Start-AzAutomationDscCompilationJob
cmdlet. More information about this method is available at Compiling a DSC Configuration with
Windows PowerShell37.
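A minimal sketch of that cmdlet follows (the resource group, Automation account, and configuration names are placeholders):
Start-AzAutomationDscCompilationJob `
    -ResourceGroupName 'rg-automation' `
    -AutomationAccountName 'myAutomationAccount' `
    -ConfigurationName 'LabConfig'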
37 https://azure.microsoft.com/en-us/documentation/articles/automation-dsc-compile/#compiling-a-dsc-configuration-with-windows-
powershell
38 https://docs.microsoft.com/en-us/azure/automation/automation-dsc-onboarding#physicalvirtual-windows-machines-on-premises-or-in-
a-cloud-other-than-azureaws
5. In the resultant Registration pane, configure the following settings, and then select OK.
●● Registration key. The primary or secondary key for registering the node with a pull service.
●● Node configuration name. The name of the node configuration that the VM should be configured to pull for Automation DSC.
●● Refresh Frequency. The time interval, in minutes, at which the LCM checks a pull service to get updated configurations. This value is ignored if the LCM is not configured in pull mode. The default value is 30.
●● Configuration Mode Frequency. How often, in minutes, the current configuration is checked and applied. This property is ignored if the ConfigurationMode property is set to ApplyOnly. The default value is 15.
●● Configuration mode. Specifies how the LCM gets configurations. Possible values are ApplyOnly, ApplyAndMonitor, and ApplyAndAutoCorrect.
●● Allow Module Override. Controls whether new configurations downloaded from the Azure Automation DSC pull server are allowed to overwrite the old modules already on the target server.
●● Reboot Node if Needed. Set this to $true to automatically reboot the node after a configuration that requires a reboot is applied. Otherwise, you will have to manually reboot the node for any configuration that requires it. The default value is $false.
●● Action after Reboot. Specifies what happens after a reboot during the application of a configuration. The possible values are ContinueConfiguration and StopConfiguration.
The service will then connect to the Azure VMs and apply the configuration.
6. Return to the State configuration (DSC) pane and verify that after applying the configuration, the
status now displays as Compliant.
Each time that Azure Automation DSC performs a consistency check on a managed node, the node sends
a status report back to the pull server. You can review these reports on that node's blade. Access this by
double-clicking or pressing the spacebar and then Enter on the node.
✔️ Note: You can also unregister the node and assign a different configuration to nodes.
For more details about onboarding VMs, see also:
●● Enable Azure Automation State Configuration39
●● Configuring the Local Configuration Manager40
Hybrid management
The Hybrid Runbook Worker feature of Azure Automation allows you to run runbooks on machines located in your own datacenter to manage local resources. Azure Automation stores and manages the runbooks, and then delivers them to one or more on-premises machines.
The Hybrid Runbook Worker functionality is presented in the following graphic:
39 https://docs.microsoft.com/en-us/azure/automation/automation-dsc-onboarding
40 https://docs.microsoft.com/en-us/powershell/scripting/dsc/managing-nodes/metaconfig?view=powershell-7
41 https://docs.microsoft.com/en-us/azure/automation/automation-dsc-onboarding
42 https://docs.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker#installing-hybrid-runbook-worker
43 https://azure.microsoft.com/en-us/blog/hybrid-management-in-azure-automation/
44 https://docs.microsoft.com/en-us/powershell/scripting/dsc/getting-started/lnxgettingstarted?view=powershell-7.1
Lab
Lab 13: Deployments using Azure Resource Man-
ager templates
Lab overview
In this lab, you will create an Azure Resource Manager template and modularize it by using a linked template. You will then modify the main deployment template to call the linked template and update dependencies, and finally deploy the templates to Azure.
Objectives
After you complete this lab, you will be able to:
●● Create a Resource Manager template
●● Create a linked template for storage resources
●● Upload the linked template to Azure Blob Storage and generate a SAS token
●● Modify the main template to call the linked template
●● Modify the main template to update dependencies
●● Deploy resources to Azure using linked templates
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions45
45 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
Which method of approach for implementing Infrastructure as Code states what the final state of an
environment should be without defining how it should be achieved?
Scripted
Imperative
Object-oriented
Declarative
Review Question 3
Which term defines the ability to apply one or more operations against a resource, resulting in the same
outcome every time?
Declarative
Idempotency
Configuration drift
Technical debt
Review Question 4
Which term is the process whereby a set of resources change their state over time from their original state in
which they were deployed?
Modularization
Technical debt
Configuration drift
Imperative
Review Question 5
Which Resource Manager deployment mode only deploys whatever is defined in the template, and does not
remove or modify any other resources not defined in the template?
Validate
Incremental
Complete
Partial
Answers
Review Question 1
What benefits from the list below can you achieve by modularizing your infrastructure and configuration
resources?
(Choose three)
■■ Easy to reuse across different environments
■■ Easier to manage and maintain your code
More difficult to subdivide work and ownership responsibilities
■■ Easier to troubleshoot
■■ Easier to extend and add to your existing infrastructure definitions
Explanation
The following answers are correct:
More difficult to subdivide work and ownership responsibilities is incorrect. It is easier to subdivide work and ownership responsibilities.
Review Question 2
Which method of approach for implementing Infrastructure as Code states what the final state of an envi-
ronment should be without defining how it should be achieved?
Scripted
Imperative
Object-oriented
■■ Declarative
Explanation
Declarative is the correct answer. The declarative approach states what the final state should be. When run,
the script or definition will initialize or configure the machine to have the finished state that was declared,
without defining how that final state should be achieved.
All other answers are incorrect. Scripted is not a methodology. In the imperative approach, the script states how to reach the final state of the machine by executing the steps to get to the finished state; it defines what the final state needs to be, but also includes how to achieve that final state.
Object-oriented is a coding methodology, but it does not define how infrastructure states and outcomes are to be achieved.
Review Question 3
Which term defines the ability to apply one or more operations against a resource, resulting in the same
outcome every time?
Declarative
■■ Idempotency
Configuration drift
Technical debt
Explanation
Idempotency is the correct answer. It is a mathematical term that can be used in the context of Infrastructure as Code and Configuration as Code, as the ability to apply one or more operations against a resource, resulting in the same outcome every time.
In Complete mode, Resource Manager deletes resources that exist in the resource group but aren't specified
in the template. For example, only resources defined in the template will be present in the resource group
after the template is deployed. As a best practice, use the Complete mode for production environments
where possible, to try to achieve idempotency in your deployment templates.
Module 14 Using Third Party Infrastructure as
Code Tools Available with Azure
Module overview
Configuration management tools enable changes and deployments to be faster, repeatable, scalable,
predictable, and able to maintain the desired state, which brings controlled assets into an expected state.
Some advantages of using configuration management tools include:
●● Adherence to coding conventions that make it easier to navigate code
●● Idempotency, which means that the end state remains the same, no matter how many times the code
is executed
●● Distribution design to improve managing large numbers of remote servers
Some configuration management tools use a pull model, in which an agent installed on the servers runs
periodically to pull the latest definitions from a central repository and apply them to the server. Other
tools use a push model, where a central server triggers updates to managed servers.
Configuration management tools enable the use of tested and proven software development practices for managing and provisioning data centers in real time through plaintext definition files.
Learning objectives
After completing this module, students will be able to:
●● Deploy and configure infrastructure using third-party tools and services with Azure, such as Chef, Puppet, Ansible, and Terraform
Chef
What is Chef?
Chef Infra is an infrastructure automation tool that you use for deploying, configuring, managing, and
ensuring compliance of applications and infrastructure. It provides for a consistent deployment and
management experience.
Chef Infra helps you to manage your infrastructure in the cloud, on-premises, or in a hybrid environment by using instructions (or recipes) to configure nodes. A node, or chef-client, is any physical or virtual machine (VM), cloud, or network device that is under management by Chef Infra.
The following diagram is of the high-level Chef Infra architecture:
●● Chef Workstation. This is the admin workstation where you create policies and execute management commands. You run the knife command from the Chef Workstation to manage your infrastructure.
Chef Infra also uses concepts called cookbooks and recipes. Chef Infra cookbooks and recipes are essen-
tially the policies that you define and apply to your servers.
Chef Automate
You can deploy Chef on Microsoft Azure from the Azure Marketplace using the Chef Automate image.
Chef Automate is a Chef product that allows you to package and test your applications, and provision and
update your infrastructure. Using Chef, you can manage changes to your applications and infrastructure
using compliance and security checks, and dashboards that give you visibility into your entire stack.
The Chef Automate image is available on the Azure Chef Server and has all the functionality of the legacy
Chef Compliance server. You can build, deploy, and manage your applications and infrastructure on
Azure. Chef Automate is available from the Azure Marketplace, and you can try it out with a free 30-day
license. You can deploy it in Azure straight away.
●● Habitat makes the application and its automation the unit of deployment, so that whether the habitat is a container, a bare-metal machine, or platform as a service (PaaS) is no longer the focus and does not constrain the application.
For more information about Habitat, go to Use Habitat to deploy your application to Azure1.
●● InSpec is a free and open-source framework for testing and auditing your applications and infrastruc-
ture. InSpec works by comparing the actual state of your system with the desired state that you
express in easy-to-read and easy-to-write InSpec code. InSpec detects violations and displays findings
in the form of a report, but you are in control of remediation.
You can use InSpec to validate the state of your VMs running in Azure. You can also use InSpec to
scan and validate the state of resources and resource groups inside a subscription.
More information about InSpec is available at Use InSpec for compliance automation of your
Azure infrastructure2.
Chef Cookbooks
Chef uses a cookbook to define a set of commands that you execute on your managed client. A cook-
book is a set of tasks that you use to configure an application or feature. It defines a scenario, and
everything required to support that scenario. Within a cookbook, there are a series of recipes, which
define a set of actions to perform. Cookbooks and recipes are written in the Ruby language.
After you create a cookbook, you can then create a Role. A Role defines a baseline set of cookbooks and
attributes that you can apply to multiple servers. To create a cookbook, you use the chef generate
cookbook command.
Create a cookbook
Before creating a cookbook, you first configure your Chef workstation by setting up the Chef Develop-
ment Kit on your local workstation. You'll use the Chef workstation to connect to and manage your Chef
server.
✔️ Note: You can download and install the Chef Development Kit from Chef downloads3.
Choose the Chef Development Kit that is appropriate to your operating system and version. For example:
●● Mac OS X/macOS
●● Debian
●● Red Hat Enterprise Linux
●● SUSE Linux Enterprise Server
●● Ubuntu
●● Windows
1. Installing the Chef Development Kit creates the Chef workstation automatically in your C:\Chef directory. After installation completes, run the following example command to generate a cookbook named webserver, which you'll use for a policy that automatically deploys IIS:
chef generate cookbook webserver
1 https://docs.microsoft.com/en-us/azure/chef/chef-habitat-overview
2 https://docs.microsoft.com/en-us/azure/chef/chef-inspec-overview
3 https://downloads.chef.io/chefdk
This command generates a set of files under the directory C:\Chef\cookbooks\webserver. Next, you
need to define the set of commands that you want the Chef client to execute on your managed VM. The
commands are stored in the default.rb file.
2. For this example, we will define a set of commands that installs and starts Microsoft Internet Informa-
tion Services (IIS), and copies a template file to the wwwroot folder. Modify the C:\chef\cookbooks\
webserver\recipes\default.rb file by adding the following lines:
powershell_script 'Install IIS' do
    action :run
    # Installs the Web-Server (IIS) Windows feature; the code attribute is required for the script to run
    code 'Add-WindowsFeature Web-Server'
end

service 'w3svc' do
    # Ensure the IIS service is enabled and started
    action [ :enable, :start ]
end

template 'c:\inetpub\wwwroot\Default.htm' do
    source 'Default.htm.erb'
end
●● Upload your cookbooks and recipes to the Chef Automate server using the following command:
knife cookbook upload <cookbook name> --include-dependencies
●● Create a role to define a baseline set of cookbooks and attributes that you can apply to multiple servers. Use the following command to create this role:
knife role create <role name>
●● Bootstrap the node or client and assign a role using the following command:
knife bootstrap <FQDN-for-App-VM> --ssh-user <app-admin-username> --ssh-password <app-vm-admin-password> --node-name <node name> --run-list role[<role you defined>] --sudo --verbose
You can also bootstrap Chef VM extensions for the Windows and Linux operating systems, in addition to
provisioning them in Azure using the Knife command. For more information, look up the ‘cloud-api’
bootstrap option in the Knife plugin documentation at https://github.com/chef/knife-azure4.
✔️ Note: You can also install the Chef extensions to an Azure VM using Windows PowerShell. By installing
the Chef Management Console, you can manage your Chef server configuration and node deployments
via a browser window.
4 https://github.com/chef/knife-azure
Puppet
What is Puppet?
Puppet is a deployment and configuration management toolset that provides you with enterprise tools
that you need to automate an entire lifecycle on your Azure infrastructure. It also provides consistency
and transparency into infrastructure changes.
Puppet provides a series of open-source configuration management tools and projects. It also provides
Puppet Enterprise, which is a configuration management platform that allows you to maintain state in
both your infrastructure and application deployments.
5 https://azure.microsoft.com/en-us/marketplace/
Manifest files
Puppet uses a declarative file syntax to define state. It defines what the infrastructure state should be, but
not how it should be achieved. You must tell it you want to install a package, but not how you want to
install the package.
Configuration or state is defined in manifest files known as Puppet Program files. These files are responsi-
ble for determining the state of the application and have the file extension .pp.
Puppet program files have the following elements:
●● class. This is a bucket that you put resources into. For example, you might have an Apache class with everything required to run Apache (such as the package, config file, running server, and any users that need to be created). That class then becomes an entity that you can use to compose other workflows.
●● resources. These are single elements of your configuration that you can specify parameters for.
●● module. This is the collection of all the classes, resources, and other elements of the Puppet program
file in a single entity.
class mrpapp {
  class { 'configuremongodb': }
  class { 'configurejava': }
}

class configuremongodb {
  include wget

  class { 'mongodb': }->

  wget::fetch { 'mongorecords':
    source => 'https://raw.githubusercontent.com/Microsoft/PartsUnlimitedMRP/master/deploy/MongoRecords.js',
    destination => '/tmp/MongoRecords.js',
    timeout => 0,
  }->

  exec { 'insertrecords':
    command => 'mongo ordering /tmp/MongoRecords.js',
    path => '/usr/bin:/usr/sbin',
    unless => 'test -f /tmp/initcomplete'
  }->

  file { '/tmp/initcomplete':
    ensure => 'present',
  }
}

class configurejava {
  include apt
  $packages = ['openjdk-8-jdk', 'openjdk-8-jre']

  apt::ppa { 'ppa:openjdk-r/ppa': }->

  package { $packages:
    ensure => 'installed',
  }
}
You can download custom Puppet modules that Puppet and the Puppet community have created from puppetforge6. Puppet Forge is a community repository that contains thousands of modules for download and use, or modification as you need. This saves you the time necessary to recreate modules from scratch.
6 https://forge.puppet.com/
Ansible
What is Ansible?
Ansible is an open-source platform by Red Hat that automates cloud provisioning, configuration man-
agement, and application deployments. Using Ansible, you can provision VMs, containers, and your entire
cloud infrastructure. In addition to provisioning and configuring applications and their environments,
Ansible enables you to automate deployment and configuration of resources in your environment such
as virtual networks, storage, subnets, and resources groups.
Ansible is designed for multi-tier deployments. Unlike Puppet or Chef, Ansible is agentless, meaning you don't have to install software on the managed machines.
Ansible also models your IT infrastructure by describing how all your systems interrelate, rather than
managing just one system at a time.
Ansible workflow
The following workflow and component diagram outlines how playbooks can run in different circum-
stances, one after another. In the workflow, Ansible playbooks:
1. Provision resources. Playbooks can provision resources. In the following diagram, playbooks create load balancers, virtual networks, network security groups, and VM scale sets on Azure.
2. Configure the application. Playbooks can deploy applications to run services, such as installing Apache
Tomcat on a Linux machine to allow you to run a web application.
3. Manage future configurations to scale. Playbooks can alter configurations by applying playbooks to
existing resources and applications—in this instance to scale the VMs.
In all cases, Ansible makes use of core components such as roles, modules, APIs, plugins, inventory, and
other components.
✔️ Note: By default, Ansible manages machines using the ssh protocol.
✔️ Note: You don't need to maintain and run commands from any central server. Instead, there is a
control machine with Ansible installed, and from which playbooks are run.
Ansible components
Ansible models your IT infrastructure by describing how all your systems interrelate, rather than just
managing one system at a time. The core components of Ansible are:
●● Control Machine. This is the machine from which the configurations are run. It can be any machine
with Ansible installed on it. However, it requires that Python 2 or Python 3 be installed on the control
machine as well. You can have multiple control nodes, laptops, shared desktops, and servers all
running Ansible.
●● Managed Nodes. These are the devices and machines (or just machines) and environments that are
being managed. Managed nodes are sometimes referred to as hosts. Ansible is not installed on nodes.
●● Playbooks. Playbooks are ordered lists of tasks that have been saved so you can run them repeatedly
in the same order. Playbooks are Ansible’s language for configuration, deployment, and orchestration.
They can describe a policy that you want your remote systems to enforce, or they can dictate a set of
steps in a general IT process.
When you create a playbook, you do so by using YAML, which defines a model of a configuration or
process, and uses a declarative model. Elements such as name, hosts, and tasks reside within play-
books.
●● Modules. Ansible works by connecting to your nodes, and then pushing small programs (or units of
code)—called modules—out to the nodes. Modules are the units of code that define the configuration.
They are modular and can be reused across playbooks. They represent the desired state of the system
(declarative), are executed over SSH by default, and are removed when finished.
A playbook is typically made up of many modules. For example, you could have one playbook containing three modules: a module for creating an Azure resource group, a module for creating a virtual network, and a module for adding a subnet (see the playbook sketch after this list).
Your library of modules can reside on any machine, and do not require any servers, daemons, or
databases. Typically, you’ll work with your favorite terminal program, a text editor, and most likely a
version control system to track changes to your content. A complete list of available modules is
available on Ansible's All modules7 page.
You can preview Ansible Azure modules on the Ansible Azure preview modules8 webpage.
●● Inventory. An inventory is a list of managed nodes. Ansible represents what machines it manages
using a .INI file that puts all your managed machines in groups of your own choosing. When adding
new machines, you don't need to use additional SSL-signing servers, thus avoiding Network Time
Protocol (NTP) and Domain Name System (DNS) issues. You can create the inventory manually, or for
Azure, Ansible supports dynamic inventories> This means that the host inventory is dynamically
generated at runtime. Ansible supports host inventories for other managed hosts as well.
●● Roles. Roles are predefined file structures that allow automatic loading of certain variables, files, tasks,
and handlers, based on the file's structure. It allows for easier sharing of roles. You might, for example,
create roles for a web server deployment.
●● Facts. Facts are data points about the remote system that Ansible is managing. When a playbook is
run against a machine, Ansible will gather facts about the state of the environment to determine the
state before executing the playbook.
●● Plug-ins. Plug-ins are code that supplements Ansible's core functionality.
7 https://docs.ansible.com/ansible/latest/modules/list_of_all_modules.html
8 https://galaxy.ansible.com/Azure/azure_preview_modules
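A minimal playbook sketch corresponding to the three-module example mentioned above follows; the resource names, region, and address ranges are illustrative, and the azure_rm_* module names come from the Ansible Azure modules referenced earlier:
---
- name: Provision basic network resources
  hosts: localhost
  connection: local
  tasks:
    - name: Create a resource group
      azure_rm_resourcegroup:
        name: rg-az400-demo
        location: westus2

    - name: Create a virtual network
      azure_rm_virtualnetwork:
        resource_group: rg-az400-demo
        name: vnet-demo
        address_prefixes: "10.0.0.0/16"

    - name: Add a subnet
      azure_rm_subnet:
        resource_group: rg-az400-demo
        name: subnet-demo
        address_prefix: "10.0.1.0/24"
        virtual_network: vnet-demo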
Installing Ansible
To enable a machine to act as the control machine from which to run playbooks, you need to install both
Python and Ansible.
Python
When you install Python, you must install either Python 2 (version 2.7), or Python 3 (versions 3.5 and
later). You can then use pip, the Python package manager, to install Ansible, or you can use other installation
methods.
Ansible on Linux
You can install Ansible on many different distributions of Linux, including, but not limited to:
●● Red Hat Enterprise Linux
●● CentOS
●● Debian
●● Ubuntu
●● Fedora
✔️ Note: Fedora is not supported as an endorsed Linux distribution on Azure. However, you can run it on Azure by uploading your own image. All of the other Linux distributions listed above are endorsed on Azure.
You can use the appropriate package manager software to install Ansible and Python, such as yum, apt,
or pip. For example, to install Ansible on Ubuntu, run the following command:
## Install pre-requisite packages
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev python-pip
## Install Ansible and Azure SDKs via pip
sudo pip install ansible[azure]
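Assuming the installation succeeded, you can verify the control machine with the following commands (the ping module targets the implicit localhost; exact output will vary):
## Confirm the installed version and that Ansible can reach the local host
ansible --version
ansible localhost -m ping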
macOS
You can also install Ansible and Python on macOS and use that environment as the control machine.
Upgrading Ansible
When Ansible manages remote machines, it doesn't leave software installed or running on them. There-
fore, there’s no real question about how to upgrade Ansible when moving to a new version.
Managed nodes
When managing nodes, you need a way to communicate with them, which is normally done over SSH by default, using the SSH file transfer protocol (SFTP). If that's not available, you can switch to the Secure Copy Protocol (SCP), which you can do in ansible.cfg. For Windows machines, use Windows PowerShell.
You can find out more about installing Ansible on the Install Ansible on Azure virtual machines9 page.
Ansible on Azure
There are several ways you can use Ansible in Azure.
Azure marketplace
You can use one of the following images available as part of the Azure Marketplace:
●● Red Hat Ansible on Azure is available as an image on Azure Marketplace, and it provides a fully
configured version. This enables easier adoption for those looking to use Ansible as their provisioning
and configuration management tool. This solution template will install Ansible on a Linux VM along
with tools configured to work with Azure. This includes:
●● Ansible (the latest version by default. You can also specify a version number.)
●● Azure CLI 2.0
●● MSI VM extension
●● apt-transport-https
9 https://docs.microsoft.com/en-us/azure/virtual-machines/linux/ansible-install-configure?toc=%2Fen-us%2Fazure%2Fansible%2Ftoc.json&bc=%2Fen-us%2Fazure%2Fbread%2Ftoc.json
●● Ansible Tower (by Red Hat). Ansible Tower by Red Hat helps organizations scale IT automation and
manage complex deployments across physical, virtual, and cloud infrastructures. Built on the proven
open-source Ansible automation engine, Ansible Tower includes capabilities that provide additional
levels of visibility, control, security, and efficiency necessary for today's enterprises. With Ansible
Tower you can:
●● Provision Azure environments with ease using pre-built Ansible playbooks.
●● Use role-based access control (RBAC) for secure, efficient management.
●● Maintain centralized logging for complete auditability and compliance.
●● Utilize the large community of content available on Ansible Galaxy.
This offering requires the use of an available Ansible Tower subscription eligible for use in Azure. If you
don't currently have a subscription, you can obtain one directly from Red Hat.
Azure VMs
Another option for running Ansible on Azure is to deploy a Linux VM on Azure virtual machines, which is
infrastructure as a service (IaaS). You can then install Ansible and the relevant components and use that
as the control machine.
✔️ Note: The Windows operating system is not supported as a control machine. However, you can run
Ansible from a Windows machine by utilizing other services and products such as Windows Subsystem
for Linux, Azure Cloud Shell, and Visual Studio Code.
For more details about running Ansible in Azure, visit:
●● Ansible on Azure documentation10 website
●● Microsoft Azure Guide11
Playbook structure
Playbooks are the language of Ansible's configurations, deployments, and orchestrations. You use them
to manage configurations of and deployments to remote machines. Playbooks are structured with YAML
(a data serialization language), and support variables. Playbooks are declarative and include detailed
information regarding the number of machines to configure at a time.
YAML structure
YAML is based around the structure of key-value pairs. In the following example, the key is name, and the
value is namevalue:
name: namevalue
In YAML syntax, a child key-value pair is placed on a new, indented line below its parent key. Each sibling key-value pair appears on a new line at the same level of indentation as its siblings.
parent:
  children:
    first-sibling: value01
    second-sibling: value02
10 https://docs.microsoft.com/en-us/azure/ansible/?ocid=AID754288&wt.mc_id=CFID0352
11 https://docs.ansible.com/ansible/latest/scenario_guides/guide_azure.html
The specific number of spaces used for indentation is not defined. You can indent each level by as many spaces as you want, but the number of spaces used at each level must be consistent throughout the file.
When a key-value pair is indented in a YAML file, it becomes the value of its parent key.
Playbook components
The following list describes some of the playbook components; a minimal skeleton that uses them follows the list:
●● name. The name of the playbook. This can be any name you wish.
●● hosts. Lists where the configuration is applied, or machines being targeted. Hosts can be a list of one
or more groups or host patterns, separated by colons. It can also contain groups such as web servers
or databases, providing that you have defined these groups in your inventory.
●● connection. Specifies the connection type.
●● remote_user. Specifies the user account that Ansible connects as when completing the tasks.
●● vars. Allows you to define the variables that can be used throughout your playbook.
●● gather_facts. Determines whether to gather node data or not. The value can be yes or no.
●● tasks. Indicates the start of the modules where the actual configuration is defined.
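A minimal skeleton that uses these components might look like the following sketch (the host group, user, variable, and task shown are illustrative):
- name: Example playbook
  hosts: webservers
  connection: ssh
  remote_user: azureuser
  gather_facts: yes
  vars:
    greeting: Hello from Ansible
  tasks:
    - name: Show a variable
      debug:
        msg: "{{ greeting }}"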
Running a playbook
You run a playbook using the following command:
ansible-playbook < playbook name >
You can also check the syntax of a playbook using the following command:
ansible-playbook <playbook name> --syntax-check
The syntax check runs the playbook through the parser to verify that included items, such as files and roles, exist and that the playbook has no syntax errors. You can also use the --verbose option for more detailed output.
●● To see a list of hosts that would be affected by running a playbook, run the command:
ansible-playbook playbook.yml --list-hosts
Sample playbook
The following code is a sample playbook that will create a Linux virtual machine in Azure:
- name: Create Azure VM
  hosts: localhost
  connection: local
  vars:
    resource_group: ansible_rg5
    location: westus
  tasks:
  - name: Create resource group
    azure_rm_resourcegroup:
      name: "{{ resource_group }}"
      location: "{{ location }}"
  - name: Create virtual network
    azure_rm_virtualnetwork:
      resource_group: myResourceGroup
      name: myVnet
      address_prefixes: "10.0.0.0/16"
  - name: Add subnet
    azure_rm_subnet:
      resource_group: myResourceGroup
      name: mySubnet
      address_prefix: "10.0.1.0/24"
      virtual_network: myVnet
  - name: Create public IP address
    azure_rm_publicipaddress:
      resource_group: myResourceGroup
      allocation_method: Static
      name: myPublicIP
    register: output_ip_address
  - name: Dump public IP for VM which will be created
    debug:
      msg: "The public IP is {{ output_ip_address.state.ip_address }}."
  - name: Create Network Security Group that allows SSH
    azure_rm_securitygroup:
      resource_group: myResourceGroup
      name: myNetworkSecurityGroup
      rules:
        - name: SSH
          protocol: Tcp
          destination_port_range: 22
          access: Allow
          priority: 1001
          direction: Inbound
  - name: Create virtual network interface card
    azure_rm_networkinterface:
      resource_group: myResourceGroup
      name: myNIC
      virtual_network: myVnet
      subnet: mySubnet
      public_ip_name: myPublicIP
      security_group: myNetworkSecurityGroup
  - name: Create VM
    azure_rm_virtualmachine:
      resource_group: myResourceGroup
      name: myVM
      vm_size: Standard_DS1_v2
      admin_username: azureuser
      ssh_password_enabled: false
      ssh_public_keys:
        - path: /home/azureuser/.ssh/authorized_keys
          key_data: <your-key-data>
      network_interfaces: myNIC
      image:
        offer: CentOS
        publisher: OpenLogic
        sku: '7.5'
        version: latest
✔️ Note: Ansible Playbook samples for Azure are available on GitHub on the Ansible Playbook Samples
for Azure12 page.
Run commands
Azure Cloud Shell has Ansible preinstalled. After you sign in to Azure Cloud Shell, select the Bash console. You do not need to install or configure anything further to run Ansible commands from the Bash console in Azure Cloud Shell.
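For example, you can confirm that Ansible is available in Cloud Shell by running commands such as the following:
ansible --version
ansible localhost -m ping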
Editor
You can also use the Azure Cloud Shell editor to review, open, and edit your playbook .yml files. You can
open the editor by selecting the curly brackets icon on the Azure Cloud Shell taskbar.
12 https://github.com/Azure-Samples/ansible-playbooks
13 https://shell.azure.com
location: eastus
10. Verify that you receive output like the following code:
PLAY [localhost] **********************************************************
***********************
11. Open Azure portal and verify that the resource group is now available in the portal.
14 https://code.visualstudio.com/
You can also view details of this extension on the Visual Studio Marketplace Ansible 15 page.
5. In Visual Studio Code, go to View > Command Palette…. Alternatively, you can select the settings
(cog) icon in the bottom, left corner of the Visual Studio Code window, and then select Command
Palette.
15 https://marketplace.visualstudio.com/items?itemName=vscoss.vscode-ansible&ocid=AID754288&wt.mc_id=CFID0352
7. When a browser launches and prompts you to sign in, select your Azure account. Verify that a message displays stating that you are now signed in and can close the page.
8. Verify that your Azure account now displays at the bottom of the Visual Studio Code window.
9. Create a new file and paste in the following playbook text:
- name: Create Azure VM
  hosts: localhost
  connection: local
  tasks:
  - name: Create resource group
    azure_rm_resourcegroup:
      name: myResourceGroup
      location: eastus
  - name: Create virtual network
    azure_rm_virtualnetwork:
      resource_group: myResourceGroup
      name: myVnet
      address_prefixes: "10.0.0.0/16"
  - name: Add subnet
    azure_rm_subnet:
      resource_group: myResourceGroup
      name: mySubnet
      address_prefix: "10.0.1.0/24"
      virtual_network: myVnet
  - name: Create public IP address
    azure_rm_publicipaddress:
      resource_group: myResourceGroup
      allocation_method: Static
      name: myPublicIP
    register: output_ip_address
  - name: Dump public IP for VM which will be created
    debug:
      msg: "The public IP is {{ output_ip_address.state.ip_address }}."
  - name: Create Network Security Group that allows SSH
    azure_rm_securitygroup:
      resource_group: myResourceGroup
      name: myNetworkSecurityGroup
      rules:
        - name: SSH
          protocol: Tcp
          destination_port_range: 22
          access: Allow
          priority: 1001
          direction: Inbound
  - name: Create virtual network interface card
    azure_rm_networkinterface:
      resource_group: myResourceGroup
      name: myNIC
      virtual_network: myVnet
      subnet: mySubnet
      public_ip_name: myPublicIP
      security_group: myNetworkSecurityGroup
  - name: Create VM
    azure_rm_virtualmachine:
      resource_group: myResourceGroup
      name: myVM
      vm_size: Standard_DS1_v2
      admin_username: azureuser
      ssh_password_enabled: true
      admin_password: Password0134
      network_interfaces: myNIC
      image:
        offer: CentOS
        publisher: OpenLogic
        sku: '7.5'
        version: latest
13. A notice might appear in the bottom, left side, informing you that the action could incur a small
charge as it will use some storage when the playbook is uploaded to cloud shell. Select Confirm &
Don't show this message again.
14. Verify that the Azure Cloud Shell pane now displays in the bottom of Visual Studio Code and is
running the playbook.
15. When the playbook finishes running, open Azure and verify the resource group, resources, and VM
have all been created. If you have time, sign in with the username and password specified in the
playbook to verify as well.
✔️ Note: If you want to use a public or private key pair to connect to the Linux VM, instead of a username and password, you could use the following code in the previous Create VM module steps:
      admin_username: adminUser
      ssh_password_enabled: false
      ssh_public_keys:
        - path: /home/adminUser/.ssh/authorized_keys
          key_data: < insert your ssh public key here... >
Terraform
What is Terraform?
HashiCorp Terraform is an open-source tool that allows you to provision, manage, and version cloud
infrastructure. It codifies infrastructure in configuration files that describe the topology of cloud resources, such as VMs, storage accounts, and networking interfaces.
Terraform's command-line interface (CLI) provides a simple mechanism to deploy and version the
configuration files to Azure or any other supported cloud service. The CLI also allows you to validate and
preview infrastructure changes before you deploy them.
Terraform also supports multi-cloud scenarios. This means it enables developers to use the same tools
and configuration files to manage infrastructure on multiple cloud providers.
You can run Terraform interactively from the CLI with individual commands, or non-interactively as part of
a continuous integration pipeline.
There is also an enterprise version of Terraform available, Terraform Enterprise.
You can view more details about Terraform on the HashiCorp Terraform16 website.
Terraform components
Some of Terraform’s core components include:
●● Configuration files. Text-based configuration files allow you to define infrastructure and application
configuration. These files end in the .tf or .tf.json extension. The files can be in either of the following
two formats:
●● Terraform. The Terraform format is easier for users to review, making it more user friendly. It supports comments and is the generally recommended format for most Terraform files. Terraform files end in .tf.
●● JSON. The JSON format is mainly for use by machines for creating, modifying, and updating
configurations. However, it can also be used by Terraform operators if you prefer. JSON files end in
.tf.json.
The order of items (such as variables and resources) as defined within the configuration file does not
matter, because Terraform configurations are declarative.
●● Terraform CLI. This is the command-line interface from which you run configurations. You can run commands such as terraform apply and terraform plan, along with many others. A CLI configuration file that configures per-user settings for the CLI is also available. However, this is separate from the CLI
infrastructure configuration. In Windows operating system environments, the configuration file is
named terraform.rc and is stored in the relevant user's %APPDATA% directory. On Linux systems, the
file is named .terraformrc (note the leading period) and is stored in the home directory of the
relevant user.
16 https://www.terraform.io/
●● Modules. Modules are self-contained packages of Terraform configurations that are managed as a
group. You use modules to create reusable components in Terraform and for basic code organization.
A list of available modules for Azure is available on the Terraform Registry Modules17 webpage.
●● Provider. The provider is responsible for understanding API interactions and exposing resources.
●● Overrides. Overrides are a way to create configuration files that are loaded last and merged into
(rather than appended to) your configuration. You can create overrides to modify Terraform behavior
without having to edit the Terraform configuration. They can also be used as temporary modifications
that you can make to Terraform configurations without having to modify the configuration itself.
●● Resources. Resources are sections of a configuration file that define components of your infrastructure, such as VMs, network resources, containers, dependencies, or DNS records. The resource block creates a resource of the given TYPE (first parameter) and NAME (second parameter); the combination of type and name must be unique within a configuration. The resource's configuration is then defined within braces. A minimal example appears after this list.
●● Execution plan. You can issue a command in the Terraform CLI to generate an execution plan. The
execution plan shows what Terraform will do when a configuration is applied. This enables you to
verify changes and flag potential issues. The command for the execution plan is terraform plan.
●● Resource graph. Using a resource graph, you can build a dependency graph of all resources. You can
then create and modify resources in parallel. This helps provision and configure resources more
efficiently.
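As a minimal illustration of the resource block syntax described above, the following sketch defines a provider and a single resource (the resource group name and location are illustrative):
provider "azurerm" {
}
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "westus"
}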
Terraform on Azure
You can use Terraform in Azure via the Azure Marketplace, the Terraform Marketplace, or Azure VMs.
Azure Marketplace
Azure Marketplace offers a fully configured Linux image containing Terraform with the following charac-
teristics:
●● The deployment template will install Terraform on a Linux (Ubuntu 16.04 LTS) VM along with tools
configured to work with Azure. Items downloaded include:
●● Terraform (latest)
●● Azure CLI 2.0
●● Managed Service Identity (MSI) VM extension
●● Unzip
●● Jq
●● apt-transport-https
●● This image also configures a remote back-end to enable remote state management using Terraform.
Terraform Marketplace
The Terraform Marketplace image makes it easy to get started using Terraform on Azure, without having
to install and configure Terraform manually. There are no software charges for this Terraform VM image.
17 https://registry.terraform.io/browse?provider=azurerm
You pay only the Azure hardware usage fees that are assessed based on the size of the VM that's provisioned.
Azure VMs
You can also deploy a Linux or Windows VM using the Azure Virtual Machines IaaS service, install Terraform and the relevant components, and then use that VM as your control machine.
Installing Terraform
To get started, you must install Terraform on the machine from which you are running the Terraform
commands.
Terraform can be installed on Windows, Linux or macOS environments. Go to the Download Terraform18
page and choose the appropriate download package for your environment.
18 https://www.terraform.io/downloads.html
Linux
1. Download Terraform using the following command:
wget https://releases.hashicorp.com/terraform/0.xx.x/terraform_0.xx.x_linux_amd64.zip
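A typical way to complete the installation, assuming the archive name downloaded above, is to extract the binary and place it on the PATH, for example:
unzip terraform_0.xx.x_linux_amd64.zip
sudo mv terraform /usr/local/bin/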
4. Verify the installation by running the command terraform. Verify that the Terraform help output displays.
#!/bin/sh
echo "Setting environment variables for Terraform"
export ARM_SUBSCRIPTION_ID=your_subscription_id
export ARM_CLIENT_ID=your_appId
export ARM_CLIENT_SECRET=your_password
export ARM_TENANT_ID=your_tenant_id
✔️ Note: After you install Terraform, and before you can apply .tf configuration files, you must run the following command to initialize Terraform for the working directory:
terraform init
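A typical end-to-end workflow from the CLI is then to initialize the working directory, preview the changes, and apply them:
terraform init
terraform plan
terraform apply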
tags {
environment = "Terraform Demo"
}
}
# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
  name                 = "mySubnet"
  resource_group_name  = "${azurerm_resource_group.myterraformgroup.name}"
  virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
  address_prefix       = "10.0.1.0/24"
}
tags {
environment = "Terraform Demo"
}
}
  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
  tags {
    environment = "Terraform Demo"
  }
}
  ip_configuration {
    name                          = "myNicConfiguration"
    subnet_id                     = "${azurerm_subnet.myterraformsubnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.myterraformpublicip.id}"
  }
tags {
environment = "Terraform Demo"
}
}
byte_length = 8
}
account_replication_type = "LRS"
tags {
environment = "Terraform Demo"
}
}
storage_os_disk {
name = "myOsDisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04.0-LTS"
version = "latest"
}
os_profile {
computer_name = "myvm"
admin_username = "azureuser"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/azureuser/.ssh/authorized_keys"
key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
}
}
  boot_diagnostics {
    enabled     = "true"
    storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
  }
  tags {
    environment = "Terraform Demo"
  }
}
The following image is an example of running Terraform in Azure Cloud Shell with a Bash shell.
Editor
You can also use the Azure Cloud Shell editor to review, open, and edit your .tf files. To open the editor,
select the braces on the Azure Cloud Shell taskbar.
Prerequisites
●● You require an Azure subscription to perform these steps. If you don't have one, you can create one by following the steps outlined on the Create your Azure free account today19 webpage.
Steps
The following steps outline how to create a resource group in Azure using Terraform in Azure Cloud Shell,
with bash.
1. Open the Azure Cloud Shell at https://shell.azure.com. You can also launch Azure Cloud Shell
from within the Azure portal by selecting the Azure Cloud Shell icon.
2. If prompted, authenticate to Azure by entering your credentials.
3. In the taskbar, ensure that Bash is selected as the shell type.
4. Create a new .tf file and open the file for editing with the following command:
vi terraform-createrg.tf
19 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
provider "azurerm" {
}
resource "azurerm_resource_group" "rg" {
name = "testResourceGroup"
location = "westus"
}
10. Run the configuration .tf file with the following command:
terraform apply
You should receive a prompt to indicate that a plan has been generated. Details of the changes should be
listed, followed by a prompt to apply or cancel the changes.
11. Enter a value of yes, and then select Enter. The command should run successfully, with output similar
to the following screenshot.
12. Open Azure portal and verify the new resource group now displays in the portal.
Prerequisites
●● This walkthrough requires Visual Studio Code. If you do not have Visual Studio Code installed, you can
download it from https://code.visualstudio.com/20. Download and install a version of Visual Studio
Code that is appropriate to your operating system environment, for example Windows, Linux, or
macOS.
●● You will require an active Azure subscription to perform the steps in this walkthrough. If you do not
have one, create an Azure subscription by following the steps outlined on the Create your Azure free
account today21 webpage.
Steps
1. Launch the Visual Studio Code editor.
2. The two Visual Studio Code extensions Azure Account and Azure Terraform must be installed. To install
the first extension, from inside Visual Studio Code, select File > Preferences > Extensions.
3. Search for and install the extension Azure Account.
4. Search for and install the extension Terraform. Ensure that you select the extension authored by
Microsoft, as there are similar extensions available from other authors.
20 https://code.visualstudio.com/
21 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio
You can view more details of this extension at the Visual Studio Marketplace on the Azure Terraform22
page.
5. In Visual Studio Code, open the command palette by selecting View > Command Palette. You can
also access the command palette by selecting the settings (cog) icon on the bottom, left side of the
Visual Studio Code window, and then selecting Command Palette.
6. In the Command Palette search field, type Azure:, and from the results, select Azure: Sign In.
7. When a browser launches and prompts you to sign into Azure, select your Azure account. The message You are signed in now and can close this page. should display in the browser.
22 https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureterraform
8. Verify that your Azure account now displays at the bottom of the Visual Studio Code window.
9. Create a new file, then copy the following code and paste it into the file.
# Create a resource group if it doesn’t exist
resource "azurerm_resource_group" "myterraformgroup" {
name = "terraform-rg2"
location = "eastus"
tags {
environment = "Terraform Demo"
}
}
tags {
environment = "Terraform Demo"
}
}
# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
  name                 = "mySubnet"
  resource_group_name  = "${azurerm_resource_group.myterraformgroup.name}"
  virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
  address_prefix       = "10.0.1.0/24"
}
tags {
environment = "Terraform Demo"
}
}
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags {
environment = "Terraform Demo"
}
}
  ip_configuration {
    name                          = "myNicConfiguration"
    subnet_id                     = "${azurerm_subnet.myterraformsubnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.myterraformpublicip.id}"
  }
tags {
environment = "Terraform Demo"
}
}
byte_length = 8
}
tags {
environment = "Terraform Demo"
}
}
storage_os_disk {
name = "myOsDisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04.0-LTS"
    version   = "latest"
  }
os_profile {
computer_name = "myvm"
admin_username = "azureuser"
admin_password = "Password0134!"
}
  os_profile_linux_config {
    disable_password_authentication = false
  }
  boot_diagnostics {
    enabled     = "true"
    storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
  }
tags {
environment = "Terraform Demo"
}
}
10. Save the file locally with the file name terraform-createvm.tf.
11. In Visual Studio Code, select View > Command Palette. Search for the command by entering terraform into the search field. Select the following command from the dropdown list of commands:
Azure Terraform: apply
12. If Azure Cloud Shell is not open in Visual Studio Code, a message might appear in the bottom, left
corner asking you if you want to open Azure Cloud Shell. Choose Accept and select Yes.
13. Wait for the Azure Cloud Shell pane to appear in the bottom of Visual Studio Code window and start
running the file terraform-createvm.tf. When you are prompted to apply the plan or cancel,
type yes, and then press Enter.
14. After the command completes successfully, review the list of resources created.
15. Open the Azure portal and verify that the resource group, resources, and the VM have been created. If you have time, sign in with the username and password specified in the .tf config file to verify.
Note: If you wanted to use a public or private key pair to connect to the Linux VM instead of a username and password, you could use the os_profile_linux_config module, set the disable_password_authentication key value to true, and include the ssh key details, as in the following code.
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/azureuser/.ssh/authorized_keys"
key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
}
}
You'd also need to remove the admin_password value in the os_profile module that is present in the example above.
Note: You could also embed the Azure authentication within the script. In that case, you would not need
to install the Azure account extension, as in the following example:
provider "azurerm" {
subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_secret = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
tenant_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
Labs
Lab 14a: Ansible with Azure
Lab overview
In this lab we will deploy, configure, and manage Azure resources by using Ansible.
Ansible is declarative configuration management software. It relies on a description of the intended
configuration applicable to managed computers in the form of playbooks. Ansible automatically applies
that configuration and maintains it going forward, addressing any potential discrepancies. Playbooks are
formatted by using YAML.
Unlike the majority of other configuration management tools, such as Puppet or Chef, Ansible is agentless, which means that it does not require the installation of any software on the managed machines. Ansible uses SSH to manage Linux servers and PowerShell Remoting to manage Windows servers and clients.
In order to interact with resources other than operating systems (such as, for example, Azure resources
accessible via Azure Resource Manager), Ansible supports extensions called modules. Ansible is written in
Python so, effectively, the modules are implemented as Python libraries. In order to manage Azure
resources, Ansible relies on GitHub-hosted modules23.
Ansible requires that the managed resources are specified in a designated host inventory. Ansible
supports dynamic inventories for some systems, including Azure, so that the host inventory is dynamically
generated at runtime.
The lab will consist of the following high-level steps:
●● Installing and configuring Ansible on the Azure VM
●● Downloading Ansible configuration and sample playbook files
●● Creating and configuring a managed identity in Azure AD
●● Configuring Azure AD credentials and SSH for use with Ansible
●● Deploying an Azure VM by using an Ansible playbook
●● Configuring an Azure VM by using an Ansible playbook
Objectives
After you complete this lab, you will be able to:
●● Install and configure Ansible on Azure VM
●● Download Ansible configuration and sample playbook files
●● Create and configure Azure Active Directory managed identity
●● Configure Azure AD credentials and SSH for use with Ansible
●● Deploy an Azure VM by using an Ansible playbook
●● Configure an Azure VM by using an Ansible playbook
23 https://github.com/ansible-collections/azure
Lab duration
●● Estimated time: 90 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions24
Objectives
After you complete this lab, you will be able to:
●● Use Terraform to implement Infrastructure as Code
●● Automate infrastructure deployments in Azure with Terraform and Azure Pipelines
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions26
24 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
25 https://www.terraform.io/intro/index.html
26 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
Which of the following are open-source products that are integrated into the Chef Automate image availa-
ble from Azure Marketplace?
Habitat
Facts
Console Services
InSpec
Review Question 3
Which of the following are core components of the Puppet automation platform?
(choose all that apply)
Master
Agent
Facts
Habitat
Review Question 4
Complete the following sentence.
The main elements of a Puppet Program (PP) Manifest file are class, resource and ________.
Module
Habitat
InSpec
Cookbooks
Review Question 5
Which of the following platforms use agents to communicate with target machines?
(choose all that apply)
Puppet
Chef
Ansible
Review Question 6
True or false: The control machine in Ansible must have Python installed?
True
False
Review Question 7
Which of the following statements about the cloud-init package are correct?
The --custom-data parameter passes the name of the configuration file (.txt).
Configuration files (.txt) are encoded in base64.
The YML syntax is used within the configuration file (.txt).
cloud-init works across Linux distributions.
Review Question 8
True or false: Terraform ONLY supports configuration files with the file extension .tf.
True
False
Review Question 9
Which of the following core Terraform components can modify Terraform behavior, without having to edit
the Terraform configuration?
Configuration files
Overrides
Execution plan
Resource graph
Answers
Review Question 1
Which of the following are main architectural components of Chef?
(choose all that apply)
■■ Chef Server
Chef Facts
■■ Chef Client
■■ Chef Workstation
Explanation
The correct answers are Chef Server, Chef Client and Chef Workstation.
Chef Facts is an incorrect answer.
Chef Facts is not an architectural component of Chef. Chef Facts misrepresents the term 'Puppet Facts'.
Puppet Facts are metadata used to determine the state of resources managed by the Puppet automation
tool.
Chef has the following main architectural components. 'Chef Server' is the Chef management point. The
two options for the Chef Server are 'hosted' and 'on-premises'. 'Chef Client (node)' is an agent that sits on
the servers you are managing. 'Chef Workstation' is an Administrator workstation where you create Chef
policies and execute management commands. You run the Chef 'knife' command from the Chef Worksta-
tion to manage your infrastructure.
Review Question 2
Which of the following are open-source products that are integrated into the Chef Automate image
available from Azure Marketplace?
■■ Habitat
Facts
Console Services
■■ InSpec
Explanation
The correct answers are Habitat and InSpec.
Facts and Console Services are incorrect answers.
Facts are metadata used to determine the state of resources managed by the Puppet automation tool.
Console Services is a web-based user interface for managing your system with the Puppet automation tool.
Habitat and InSpec are two open-source products that are integrated into the Chef Automate image
available from Azure Marketplace. Habitat makes the application and its automation the unit of deploy-
ment, by allowing you to create platform-independent build artifacts called 'habitats' for your applications.
InSpec allows you to define desired states for your applications and infrastructure. InSpec can conduct audits
to detect violations against your desired state definitions and generate reports from its audit results.
Review Question 3
Which of the following are core components of the Puppet automation platform?
(choose all that apply)
■■ Master
■■ Agent
■■ Facts
Habitat
Explanation
The correct answers are Master, Agent and Facts.
Habitat is an incorrect answer.
Habitat is used with Chef for creating platform-independent build artifacts called for your applications.
Master, Agent and Facts are core components of the Puppet automation platform. Another core component
is 'Console Services'. Puppet Master acts as a center for Puppet activities and processes. Puppet Agent runs
on machines managed by Puppet, to facilitate management. Console Services is a toolset for managing and
configuring resources managed by Puppet. Facts are metadata used to determine the state of resources
managed by Puppet.
Review Question 4
Complete the following sentence.
The main elements of a Puppet Program (PP) Manifest file are class, resource and ________.
■■ Module
Habitat
InSpec
Cookbooks
Explanation
Module is the correct answer.
All other answers are incorrect answers.
Habitat, InSpec and Cookbooks are incorrect because they relate to the Chef automation platform.
The main elements of a Puppet Program (PP) Manifest file are class, resource and module. Classes define
related resources according to their classification, to be reused when composing other workflows. Resources
are single elements of your configuration which you can specify parameters for. Modules are collections of
all the classes, resources, and other elements in a single entity.
Review Question 5
Which of the following platforms use agents to communicate with target machines?
(choose all that apply)
■■ Puppet
■■ Chef
Ansible
Explanation
The correct answers are: Puppet and Chef.
Ansible is an incorrect answer.
Ansible is agentless because you do not need to install an Agent on each of the target machines it manages.
Ansible uses the Secure Shell (SSH) protocol to communicate with target machines. You choose when to
conduct compliance checks and perform corrective actions, instead of using Agents and a Master to perform these actions.
Review Question 8
True or false: Terraform ONLY supports configuration files with the file extension .tf.
■■ False
Explanation
False is the correct answer. Terraform also supports the file extension .tf.json for Terraform JSON format configuration files. Terraform supports configuration
files in either .tf or .tf.json format. The Terraform .tf format is more human-readable, supports comments,
and is the generally recommended format for most Terraform files. The JSON format .tf.json is meant for
use by machines, but you can write your configuration files in JSON format if you prefer.
Review Question 9
Which of the following core Terraform components can modify Terraform behavior, without having to
edit the Terraform configuration?
Configuration files
■■ Overrides
Execution plan
Resource graph
Explanation
Overrides is the correct answer.
All other answers are incorrect answers.
Configuration files, in .tf or .tf.json format, allow you to define your infrastructure and application configura-
tions with Terraform.
Execution plan defines what Terraform will do when a configuration is applied.
Resource graph builds a dependency graph of all Terraform managed resources.
Overrides modify Terraform behavior without having to edit the Terraform configuration. Overrides can also
be used to apply temporary modifications to Terraform configurations without having to modify the
configuration itself.
Module 15 Managing Containers using Docker
Module overview
Module overview
Containers are the third model of compute, after bare metal and virtual machines – and containers are
here to stay. Docker gives you a simple platform for running apps in containers, old and new apps on
Windows and Linux, and that simplicity is a powerful enabler for all aspects of modern IT. Containers
aren’t only faster and easier to use than VMs; they also make far more efficient use of computing hard-
ware.
Learning objectives
After completing this module, students will be able to:
●● Implement a container strategy including how containers are different from virtual machines and how
microservices use containers
●● Implement containers using Docker
●● Implement Docker multi-stage builds
Structure of containers
If you’re a programmer or techie, chances are you’ve at least heard of Docker: a helpful tool for packing,
shipping, and running applications within “containers.” It’d be hard not to; with all the attention it’s
getting these days from developers and system admins alike. Just to reiterate, there is a difference between containers and Docker: a container runs a packaged application together with its dependencies, while Docker is the container runtime and the tooling used to build and run containers.
By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.
Virtual Machines
A VM is essentially an emulation of a real computer that executes programs like a real computer. VMs run
on top of a physical machine using a “hypervisor”. As you can see in the diagram, VMs package up the
virtual hardware, a kernel (i.e., OS) and user space for each new VM.
Container
Unlike a VM which provides hardware virtualization, a container provides operating-system-level virtual-
ization by abstracting the “user space”. This diagram shows you that containers package up just the user
space, and not the kernel or virtual hardware like a VM does. Each container gets its own isolated user
space to allow multiple containers to run on a single host machine. We can see that all the operating
system level architecture is being shared across containers. The only parts that are created from scratch
are the bins and libs. This is what makes containers so lightweight.
Docker is a software containerization platform with a common toolset, packaging model, and deploy-
ment mechanism, which greatly simplifies containerization and distribution of applications that can be
run anywhere. This ubiquitous technology not only simplifies management by offering the same manage-
ment commands against any host, but it also creates a unique opportunity for seamless DevOps.
From a developer’s desktop to a testing machine, to a set of production machines, a Docker image can
be created that will deploy identically across any environment in seconds. This is a massive and growing
ecosystem of applications packaged in Docker containers, with DockerHub, the public containerized-ap-
plication registry that Docker maintains, currently publishing more than 180,000 applications in the public
community repository. Additionally, to guarantee the packaging format remains universal, Docker
organized the Open Container Initiative (OCI), aiming to ensure container packaging remains an open
and foundation-led format.
As an example of the power of containers, a SQL Server Linux instance can be deployed using a Docker
image in seconds.
For more information, see:
●● Docker Ebook, Docker for the Virtualization Admin1
●● Mark Russinovich blog post on Containers: Docker, Windows, and Trends2
1 https://goto.docker.com/docker-for-the-virtualization-admin.html
2 https://azure.microsoft.com/en-us/blog/containers-docker-windows-and-trends/
Note that you can often just execute docker run without needing to first perform docker pull. In that case,
Docker will pull the image and then run it. Next time, it won't need to pull it again.
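For example, you can pull and then run the hello-world image explicitly, or simply run it and let Docker pull it on demand:
docker pull hello-world
docker run hello-world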
The most immediately lucrative use for containers has been focused on simplifying DevOps with easy
developer-to-test-to-production flows for services deployed in the cloud or on-premises. But there is
another fast-growing scenario where containers are becoming very compelling.
Microservices is an approach to application development where every part of the application is deployed
as a fully self-contained component, called a microservice, that can be individually scaled and updated.
Containers lend themselves well to this style of development.
Example scenario
Imagine that you are part of a software house that produces a large monolithic financial management
application that you are migrating to a series of microservices. The existing application would include the
code to update the general ledger for each transaction, and it would have this code in many places
throughout the application. If the schema of the general ledger transactions table is modified, this would
require changes throughout the application.
By comparison, the application could be modified to make a notification that a transaction has occurred.
Any microservice that is interested in the transactions could subscribe. In particular, a separate general
ledger microservice could subscribe to the transaction notifications, and then perform the general ledger
related functionality. If the schema of the table that holds the general ledger transactions is modified,
only the general ledger microservice should need to be updated.
If a particular client organization wants to run the application and not use the general ledger, that service
could just be disabled. No other changes to the code would be required.
Scale
In a dev/test environment on a single system, while you might have a single instance of each microser-
vice, in production you might scale out to different numbers of instances across a cluster of servers
depending on their resource demands as customer request levels rise and fall. If different teams produce
them, the teams can also independently update them. Microservices is not a new approach to program-
ming, nor is it tied explicitly to containers, but the benefits of Docker containers are magnified when
applied to a complex microservice-based application. Agility means that a microservice can quickly scale
out to meet increased load, the namespace and resource isolation of containers prevents one microser-
vice instance from interfering with others and use of the Docker packaging format and APIs unlocks the
Docker ecosystem for the microservice developer and application operator. With a good microservice
architecture, customers can solve the management, deployment, orchestration, and patching needs of a
container-based service with reduced risk of availability loss while maintaining high agility.
3 https://azure.microsoft.com/en-us/services/container-instances/
4 https://azure.microsoft.com/en-us/services/kubernetes-service/
5 https://azure.microsoft.com/en-us/services/container-registry/
6 https://azure.microsoft.com/en-us/services/service-fabric/
Azure Service Fabric allows you to build and operate always-on, scalable, distributed apps. It simplifies
the development of microservice-based applications and their life cycle management including rolling
updates with rollback, partitioning, and placement constraints. It can host and orchestrate containers,
including stateful containers.
Azure App Service7
Azure Web Apps provides a managed service for both Windows and Linux based web applications and
provides the ability to deploy and run containerized applications for both platforms. It provides options
for auto-scaling and load balancing and is easy to integrate with Azure DevOps.
Public
Common public container registries are:
Docker Hub8
Red Hat Container Catalog9
Microsoft Container Registry10
While these services offer public registries, you can also host your own private registries, for example by using Azure Container Registry, described next.
Private
Azure Container Registry is a managed, private Docker registry service based on the open-source Docker
Registry 2.0. You can use Azure container registries to store and manage your private Docker container
images and related artifacts. Azure container registries can be used with your existing container develop-
ment and deployment pipelines.
7 https://azure.microsoft.com/en-us/services/app-service/
8 https://hub.docker.com
9 https://access.redhat.com/containers
10 https://mcr.microsoft.com
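A registry like the one described in the following output is typically created with the az acr create command; a sketch, using the resource group and registry name that appear in the output below:
az acr create --resource-group myResourceGroup --name myaz400containerregistry --sku Basic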
The response from this command returns the loginServer value, which is the fully qualified URL of the registry.
{
"adminUserEnabled": false,
"creationDate": "2020-03-08T22:32:13.175925+00:00",
"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resource-
Groups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/
myaz400containerregistry",
"location": "eastus",
"loginServer": "myaz400containerregistry.azurecr.io",
"name": "myaz400containerregistry",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"sku": {
"name": "Basic",
"tier": "Basic"
},
"status": null,
"storageAccount": null,
"tags": {},
"type": "Microsoft.ContainerRegistry/registries"
}
Log in to registry
Before pushing and pulling container images, you must log in to the registry. To do so, use the az acr
login command.
az acr login --name <acrName>
Before you can push an image to your registry, you must tag it with the fully qualified name of your ACR
login server. The login server name is in the format ‘registry-name’.azurecr.io (all lowercase), for example,
myaz400containerregistry.azurecr.io.
docker tag hello-world <acrLoginServer>/hello-world:v1
Finally, use docker push to push the image to the ACR instance. Replace acrLoginServer with the login
server name of your ACR instance. This example creates the hello-world repository, containing the
hello-world:v1 image.
docker push <acrLoginServer>/hello-world:v1
After pushing the image to your container registry, remove the hello-world:v1 image from your local
Docker environment.
docker rmi <acrLoginServer>/hello-world:v1
Clean up resources
When no longer needed, you can use the az group delete command to remove the resource group, the
container registry, and the container images stored there.
az group delete --name myResourceGroup
FROM ubuntu
LABEL maintainer="greglow@contoso.com"
ADD appsetup /
RUN /bin/bash -c 'source $HOME/.bashrc; \
echo $HOME'
CMD ["echo", "Hello World from within the container"]
The RUN command is run when the image is being created by docker build. It is generally used to
configure items within the image.
By comparison, the last line (CMD) represents a command that will be executed when a new container is created from the image; that is, it runs after container creation.
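For example, you could build an image from the preceding Dockerfile and start a container from it with commands such as the following (the image tag is illustrative, and the build context must contain the appsetup file referenced by the ADD instruction):
docker build -t myappsetup .
docker run --rm myappsetup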
For more information, you can see:
Dockerfile reference11
11 https://docs.docker.com/engine/reference/builder/
The --target option tells docker build to create an image only up to the target stage, for example publish, which is one of the named stages.
Multi-stage Dockerfiles
What are multi-stage Dockerfiles?
Multi-stage builds give the benefits of the builder pattern without the hassle of maintaining three
separate files. Let's look at a multi-stage Dockerfile.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
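Only the first (base) stage is shown above; a minimal sketch of the build, publish, and final stages that the following discussion refers to, assuming a project named Web, might look like this:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["Web/Web.csproj", "Web/"]
RUN dotnet restore "Web/Web.csproj"
COPY . .
WORKDIR "/src/Web"
RUN dotnet build "Web.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "Web.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Web.dll"]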
At first, it simply looks like several dockerfiles stitched together. Multi-stage Dockerfiles can be layered or
inherited. When you look closer, there are a couple of key things to realize.
Notice the 3rd stage
FROM build AS publish
build isn't an image pulled from a registry. It's the image we defined in stage 2, where we named the result of our SDK (build) stage build. Docker build creates a named stage that we can later reference.
reference.
We can also copy the output from one image to another. This is the real power: we can compile our code with one SDK image (mcr.microsoft.com/dotnet/core/sdk:3.1), while creating a production image based on an optimized runtime image (mcr.microsoft.com/dotnet/core/aspnet:3.1).
Notice the line
COPY --from=publish /app/publish .
This takes the /app/publish directory from the publish image, and copies it to the working directory of
the production image.
Breakdown of stages
The first stage provides the base of our optimized runtime image. Notice that it derives from mcr.microsoft.com/dotnet/core/aspnet:3.1. This is where we'd specify additional production configurations, such as registry settings or installing additional components; these are the environment configurations you would otherwise hand off to your ops folks to prepare the VM.
The second stage is our build environment, based on mcr.microsoft.com/dotnet/core/sdk:3.1. This includes everything we need to compile our code. From here, we have compiled binaries that we can publish or test. More on testing in a moment.
The 3rd stage derives from our build stage. It takes the compiled output and “publishes” it, in .NET terms. Publishing simply means taking all the output required to deploy your app, service, or component and placing it in a single directory. This would include your compiled binaries, graphics (images), JavaScript, and so on.
The 4th stage is taking the published output, and placing it in the optimized image we defined in the first
stage.
12 https://docs.docker.com/develop/develop-images/multistage-build/
And the SDKs can be quite big, not to mention any potential attack surface area. A workaround which is
informally called the builder pattern involves using two Docker images - one to perform a build and
another to ship the results of the first build without the penalty of the build-chain and tooling in the first
image.
An example of the builder pattern:
●● Derive from a dotnet base image with the whole runtime/SDK (Dockerfile.build)
●● Add source code.
●● Produce a statically linked binary.
●● Copy the static binary from the image to the host (docker create, docker cp).
●● Derive from SCRATCH or some other light-weight image (Dockerfile).
●● Add the binary back in.
●● Push a tiny image to the Docker Hub.
This normally meant having two separate Dockerfiles and a shell script to orchestrate all seven steps above. Additionally, the challenge with building on the host, including on hosted build agents, is that we must first have a build agent with everything we need, including the specific versions. If your dev shop has any history of .NET apps, you'll likely have multiple versions to maintain, which means you need complex agents to deal with those complexities.
13 https://github.com/SteveLasker/AspNetCoreMultiProject
[Api]
Dockerfile
[Web]
Dockerfile
We can now build the solution with a single docker command. We'll use docker-compose, as our compose file has our image names as well as the individual build definitions.
version: '3'
services:
  web:
    image: stevelas.azurecr.io/samples/multiproject/web
    build:
      context: .
      dockerfile: Web/Dockerfile
  api:
    image: stevelas.azurecr.io/samples/multiproject/api
    build:
      context: .
      dockerfile: Api/Dockerfile
Open a command prompt or PowerShell window and change to the root directory of the solution:
PS> cd C:\Users\stevelas\Documents\GitHub\SteveLasker\AspNetCoreMultiProject
PS> docker-compose build
14 https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/docker/visual-studio-tools-for-docker?view=aspnetcore-3.1
Lab
Lab 15: Deploying Docker Containers to Azure
App Service web apps
Lab overview
In this lab, you will learn how to use an Azure DevOps CI/CD pipeline to build a custom Docker image,
push it to Azure Container Registry, and deploy it as a container to Azure App Service.
Objectives
After you complete this lab, you will be able to:
●● Build a custom Docker image by using a Microsoft-hosted Linux agent
●● Push an image to Azure Container Registry
●● Deploy a Docker image as a container to Azure App Service by using Azure DevOps
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions15
15 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
You are designing a multi-stage Dockerfile. How can one stage refer to another stage within the Dockerfile?
Review Question 3
What is the line continuation character in Dockerfiles?
Review Question 4
When the Open Container Initiative defined a standard container image file format, which format did they
choose as a starting point?
Answers
You are reviewing an existing Dockerfile. How would you know if it's a multi-stage Dockerfile?
Multi-stage Docker files are characterized by containing more than one starting point provided as FROM
instructions.
You are designing a multi-stage Dockerfile. How can one stage refer to another stage within the Docker-
file?
The FROM clause in a multi-stage Dockerfile can contain an alias via an AS clause. The stages can refer to
each other by number or by the alias names.
Lines can be broken and continued on the next line of a Dockerfile by using the backslash character.
When the Open Container Initiative defined a standard container image file format, which format did they choose as a starting point?
The OCI image format used the Docker image format as its starting point, after Docker donated its container format to the initiative.
Module overview
Module overview
As most modern software developers can attest, containers have provided engineering teams with dramatically more flexibility for running cloud-native applications on physical and virtual infrastructure.
Containers package up the services comprising an application and make them portable across different
compute environments, for both dev/test and production use. With containers, it’s easy to quickly ramp
application instances to match spikes in demand. And because containers draw on resources of the host
OS, they are much lighter weight than virtual machines. This means containers make highly efficient use
of the underlying server infrastructure.
So far so good. But though the container runtime APIs are well suited to managing individual containers,
they’re woefully inadequate when it comes to managing applications that might comprise hundreds of
containers spread across multiple hosts. Containers need to be managed and connected to the outside
world for tasks such as scheduling, load balancing, and distribution, and this is where a container orches-
tration tool like Kubernetes comes into its own.
An open-source system for deploying, scaling, and managing containerized applications, Kubernetes
handles the work of scheduling containers onto a compute cluster and manages the workloads to ensure
they run as the user intended. Instead of bolting on operations as an afterthought, Kubernetes brings
software development and operations together by design. By using declarative, infrastructure-agnostic
constructs to describe how applications are composed, how they interact, and how they are managed,
Kubernetes enables an order-of-magnitude increase in operability of modern software systems.
Kubernetes was built by Google based on its own experience running containers in production, and it surely owes much of its success to Google's involvement. Today, Kubernetes is an open-source project governed by the Cloud Native Computing Foundation.
Because the Kubernetes platform is open-source and has so many supporters, it is growing rapidly
through contributions. Kubernetes marks a breakthrough for DevOps because it allows teams to keep
pace with the requirements of modern software development.
Learning objectives
After completing this module, students will be able to:
●● Deploy and configure a Managed Kubernetes cluster
There are several other container cluster orchestration technologies available, such as Mesosphere DC/OS1 and Docker Swarm2. Today, though, most of the industry interest appears to be in Kubernetes.
For more details about Kubernetes, go to Production-Grade Container Orchestration3 on the Kuber-
netes website.
AKS manages much of the Kubernetes resources for the end user, making it quicker and easier to deploy
and manage containerized applications without container orchestration expertise. It also eliminates the
burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on
demand without taking applications offline.
Azure AKS manages the following aspects of a Kubernetes cluster for you:
●● It manages critical tasks, such as health monitoring and maintenance, Kubernetes version upgrades,
and patching.
●● It performs simple cluster scaling.
●● It enables master nodes to be fully managed by Microsoft.
●● It leaves you responsible only for managing and maintaining the agent nodes.
●● It ensures master nodes are free, and you only pay for running agent nodes.
If you were manually deploying Kubernetes, you would need to pay for the resources for the master
nodes.
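For example, you can create a basic AKS cluster and retrieve its credentials with the Azure CLI (resource and cluster names are illustrative):
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster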
1 https://mesosphere.com/product/
2 https://www.docker.com/products/orchestration
3 https://kubernetes.io/
Cluster master
When you create an AKS cluster, a cluster master is automatically created and configured. This cluster
master is provided as a managed Azure resource abstracted from the user. There is no cost for the cluster
master, only the nodes that are part of the AKS cluster.
The cluster master includes the following core Kubernetes components:
●● kube-apiserver. The API server is how the underlying Kubernetes APIs are exposed. This component
provides the interaction for management tools such as kubectl or the Kubernetes dashboard.
●● etcd. To maintain the state of your Kubernetes cluster and configuration, the highly available etcd is a
key value store within Kubernetes.
●● kube-scheduler. When you create or scale applications, the Scheduler determines what nodes can run
the workload, and starts them.
●● kube-controller-manager. The Controller Manager oversees several smaller controllers that perform
actions such as replicating pods and managing node operations.
Nodes and node pools
Nodes of the same configuration are grouped together into node pools. A Kubernetes cluster contains
one or more node pools. The initial number of nodes and their size are defined when you create an AKS
cluster, which creates a default node pool. This default node pool in AKS contains the underlying VMs
that run your agent nodes.
Pods
Kubernetes uses pods to run an instance of your application. A pod represents a single instance of your
application. Pods typically have a 1:1 mapping with a container, although there are advanced scenarios
where a pod might contain multiple containers. These multi-container pods are scheduled together on
the same node and allow containers to share related resources.
When you create a pod, you can define resource limits to request a certain amount of CPU or memory
resources. The Kubernetes Scheduler attempts to schedule the pods to run on a node with available
resources to meet the request. You can also specify maximum resource limits that prevent a given pod
from consuming too much compute resource from the underlying node.
✔️ Note: A best practice is to include resource limits for all pods to help the Kubernetes Scheduler under-
stand what resources are needed and permitted.
A pod is a logical resource, but the container (or containers) is where the application workloads run. Pods
are typically ephemeral, disposable resources. Therefore, individually scheduled pods miss some of the
high availability and redundancy features Kubernetes provides. Instead, pods are usually deployed and
managed by Kubernetes controllers, such as the Deployment controller.
Kubernetes networking
Kubernetes pods have a limited lifespan and are replaced whenever new versions are deployed. Settings
such as the IP address change regularly, so interacting with pods by using an IP address is not advised.
To solve this, Kubernetes has Services. To simplify the network configuration for application workloads,
Kubernetes uses Services to logically group a set of pods together and provide network connectivity.
A Kubernetes Service is an abstraction that defines a logical set of pods, combined with a policy that
describes how to access them. While pods have a shorter lifecycle, Services are usually more stable and
are not affected by container updates. This means that you can safely configure applications to interact
with pods through Services. The Service redirects incoming network traffic to its internal pods. Services
can offer more specific functionality, based on the service type that you specify in the Kubernetes
deployment file.
If you do not specify the service type, you will get the default type, which is ClusterIP. This means that
your services and pods will receive virtual IP addresses that are only accessible from within the cluster.
Although this might be a good practice for containerized back-end applications, it might not be what you
want for applications that need to be accessible from the internet. You need to determine how to config-
ure your Kubernetes cluster to make those applications and pods accessible from the internet.
Services
The following Service types are available:
●● ClusterIP. This Service type creates an internal IP address for use within the AKS cluster, which makes it a good choice for internal-only applications that support other workloads within the cluster.
●● NodePort. This Service type creates a port mapping on the underlying node, which enables the application to be accessed directly with the node IP address and port.
●● LoadBalancer. This Service type creates an Azure Load Balancer resource, configures an external IP address, and connects the requested pods to the load balancer backend pool. To allow customer traffic to reach the application, load-balancing rules are created on the desired ports (see the example after this list).
Ingress controllers
When you create a Load Balancer–type Service, an underlying Azure Load Balancer resource is created.
The load balancer is configured to distribute traffic to the pods in your service on a given port. The Load
Balancer only works at layer 4. The Service is unaware of the actual applications and can't make any
additional routing considerations.
Ingress controllers work at layer 7 and can use more intelligent rules to distribute application traffic. A
common use of an Ingress controller is to route HTTP traffic to different applications based on the
inbound URL.
There are different implementations of the Ingress controller concept. One example is the NGINX
Ingress Controller, which translates the Ingress resource into an nginx.conf file. Other examples are
the ALB Ingress Controller (AWS) and the GCE Ingress Controller (Google Cloud),
which make use of cloud-native resources. Using the Ingress setup within Kubernetes makes it possible to
easily switch the reverse proxy implementation, so that your containerized workload gets the most out
of the cloud platform on which it is running.
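A minimal sketch of such an Ingress resource is shown below. The resource and Service names are hypothetical, the manifest assumes an NGINX ingress controller is already installed in the cluster, and the apiVersion may differ on older Kubernetes versions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                     # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx  # route through the NGINX ingress controller
spec:
  rules:
  - http:
      paths:
      - path: /orders                   # HTTP traffic for /orders goes to one application
        pathType: Prefix
        backend:
          service:
            name: orders-svc            # hypothetical back-end Service
            port:
              number: 80
      - path: /catalog                  # /catalog traffic goes to another application
        pathType: Prefix
        backend:
          service:
            name: catalog-svc           # hypothetical back-end Service
            port:
              number: 80
```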
Deployment units
Kubernetes uses the term pod to package applications. A pod is a deployment unit, and it represents a
running process on the cluster. It consists of one or more containers, plus configuration, storage resources,
and networking support. Pods are usually created by a controller, which monitors them and provides
self-healing capabilities at the cluster level.
Pods are described by using YAML or JSON. Pods that work together to provide functionality are grouped
into services to create microservices. For example, a front-end pod and a back-end pod could be grouped
into one service.
You can deploy an application to Kubernetes by using the kubectl CLI, which can manage the cluster. By
running kubectl on your build agent, it's possible to deploy Kubernetes pods from Azure DevOps. It's
also possible to use the management API directly. There is also a specific Kubernetes task called Deploy
To Kubernetes that is available in Azure DevOps. More information about this will be covered in the
upcoming demonstration.
Continuous delivery
To achieve continuous delivery, the build and release pipelines are run for every check-in to the source
repository.
Prerequisites
●● Use the cloud shell.
●● You require an Azure subscription to be able to perform these steps. If you don't have one, you can
create it by following the steps outlined on the Create your Azure free account today4 page.
Steps
1. Open Azure Cloud Shell by going to https://shell.azure.com or using the Azure Portal and selecting
Bash as the environment option.
4 https://azure.microsoft.com/en-us/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_
campaign=visualstudio
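The commands that create the resource group and the AKS cluster (steps 2 and 3) are not reproduced on this page. A minimal sketch of what they typically look like is shown below; the resource group and cluster names match those used in the rest of this walkthrough, while the region, node count, and add-on options are assumptions you should adjust for your environment:

```bash
# 2. Create a resource group to hold the cluster (the region is an assumption)
az group create --name myResourceGroup --location eastus

# 3. Create the AKS cluster; monitoring is enabled because Azure Monitor for
#    containers is referenced later in this walkthrough
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 1 \
    --enable-addons monitoring \
    --generate-ssh-keys
```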
After a few minutes, the command completes and returns JSON-formatted information about the cluster.
4. To manage a Kubernetes cluster, you use kubectl, the Kubernetes command-line client. If you use
Azure Cloud Shell, kubectl is already installed. To install kubectl locally, use the following com-
mand:
az aks install-cli
5. To configure kubectl to connect to your Kubernetes cluster, use the az aks get-credentials
command. This command downloads credentials and configures the Kubernetes CLI to use them:
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
6. Verify the connection to your cluster by running the following command. Make sure that the status of
the node is Ready:
kubectl get nodes
7. Create a file named azure-vote.yaml and copy the following YAML definition into it. If you use
Azure Cloud Shell, you can create this file by using vi or nano, as you would on a virtual or physical
system:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
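The command for step 8, which actually deploys these manifests, does not appear on this page. Based on the file created in step 7, it would typically be:

```bash
# 8. Deploy the application defined in the manifest file
kubectl apply -f azure-vote.yaml
```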
When the manifest is applied, you should receive output showing that the Deployments and Services were created successfully.
9. When the application runs, a Kubernetes Service exposes the application front end to the internet. This
process can take a few minutes to complete. To monitor progress, run the following command:
kubectl get service azure-vote-front --watch
10. Initially, the EXTERNAL-IP for the azure-vote-front service is shown as pending:

NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
azure-vote-front   LoadBalancer   10.0.37.27   <pending>     80:30572/TCP   6s
11. When the EXTERNAL-IP address changes from pending to an actual public IP address, use CTRL-C to
stop the kubectl watch process. The following example output shows a valid public IP address
assigned to the service:

azure-vote-front   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m
12. To see the Azure Vote app in action, open a web browser to the external IP address of your service.
Monitor health and logs
When the AKS cluster was created, Azure Monitor for containers was enabled to capture health metrics
for both the cluster nodes and pods. These health metrics are available in the Azure portal. To see current
status, uptime, and resource usage for the Azure Vote pods, complete the following steps in the Azure
portal:
13. Open a web browser to the Azure portal https://portal.azure.com.
14. Select your resource group, such as myResourceGroup, then select your AKS cluster, such as myAKS-
Cluster.
19. To see logs for the azure-vote-front pod, select the View container logs link on the right-hand side of
the containers list. These logs include the stdout and stderr streams from the container.
✔️ Note: If you are not continuing to use the Azure resources, remember to delete them to avoid
incurring costs.
Continuous deployment
In Kubernetes, you can update a service by using a rolling update. This ensures that traffic to a
container is first drained, then the container is replaced, and finally traffic is sent to the new
container. In the meantime, your customers won't see any changes until the new containers are up and
running on the cluster. The moment they are, new traffic is routed to the new containers and no longer to
the old ones. Running a rolling update is easy to do with the following command:
kubectl apply -f nameofyamlfile
The YAML file contains a specification of the deployment. The apply command is convenient because it
does not matter whether the deployment already exists on the cluster. This means that you can always
use the same steps, regardless of whether you are doing an initial deployment or updating an existing
deployment.
When you change the name of the image for a service in the YAML file, Kubernetes applies a rolling
update, taking into account the minimum number of running containers you want and how many it is
allowed to stop at a time. The cluster takes care of updating the images without downtime, assuming that
your application container is built to be stateless.
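As an alternative to editing the YAML file, the same rolling update can be triggered from the command line. The sketch below assumes the azure-vote-front Deployment from the earlier walkthrough and a hypothetical v2 image tag:

```bash
# Point the Deployment at a new image tag; Kubernetes performs a rolling update
kubectl set image deployment/azure-vote-front azure-vote-front=microsoft/azure-vote-front:v2

# Follow the progress of the rollout
kubectl rollout status deployment/azure-vote-front

# Roll back to the previous revision if the new version misbehaves
kubectl rollout undo deployment/azure-vote-front
```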
Updating images
After you've successfully containerized your application, you'll need to ensure that you update your
image regularly. This entails creating a new image for every change you make in your own code and
ensuring that all layers receive regular patching.
A large part of a container image is the base OS layer, which contains the elements of the operating
system that are not shared with the container host.
The base OS layer gets updated frequently. Other layers, such as the IIS layer and ASP.NET layer in the
image, are also updated. Your own images are built on top of these layers, and it's up to you to ensure
that they incorporate those updates.
Fortunately, the base OS layer consists of two separate images: a larger base layer and a smaller update
layer. The base layer changes less frequently than the update layer. Updating your image's base OS layer
is usually a matter of getting the latest update layer.
If you're using a Dockerfile to create your image, patching these layers is done by explicitly changing
the image version number in the FROM instruction. For example, you would change:
```dockerfile
FROM microsoft/windowsservercore:10.0.14393.321
RUN cmd /c echo hello world
```
into
```dockerfile
FROM microsoft/windowsservercore:10.0.14393.693
RUN cmd /c echo hello world
```
When you build this Dockerfile, it now uses version 10.0.14393.693 of the microsoft/windowsservercore
image.
Latest tag
Don't be tempted to rely on the latest tag. To define repeatable custom images and deployments, you
should always be explicit about the base image versions that you are using. Also, just because an image is
tagged latest doesn't mean that it actually contains the latest updates; that depends on the owner of the
image.
✔️ Note: The last two segments of the version number of Windows Server Core and Nano images will
match the build number of the operating system inside.
Kubernetes tooling
kubectl
kubectl is a command-line tool for running commands against Kubernetes clusters. You use it to deploy
applications and manage cluster resources.
Some common commands for kubectl are shown below:
Common commands:
●● annotate - Add or update annotations for resources
●● apply - Apply configuration changes
●● autoscale - Auto-scale the pods managed by a replication controller
●● certificate - Modify certificate resources
●● cluster-info - Display endpoint information about the master and services
●● config - Modify kubeconfig files
●● cp - Copy files to and from containers
●● describe - Show the detailed state of resources
●● exec - Execute a command against a container
●● label - Add or update labels for resources
●● logs - Print the logs for a container
●● run - Run a specified image on the cluster
For more information on kubectl commands and resource types, see: Overview of kubectl5
Helm
Helm is a package manager for Kubernetes. It makes it easier to package, configure, and deploy applica-
tions and services.
helm (lower case) is the command-line tool that provides a user interface to the functionality of Helm.
Tiller was a server-side component that executed Helm packages; from Helm 3 onwards, Tiller is no
longer required.
Helm packages are called charts and are implemented in YAML. There are public and private repositories
for Helm charts.
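As a brief, hedged illustration of the workflow (the repository and release names below are examples, and the syntax shown is for Helm 3):

```bash
# Add a public chart repository and refresh the local chart index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart as a named release into the current Kubernetes context
helm install my-nginx bitnami/nginx

# List releases, then remove the release when it is no longer needed
helm list
helm uninstall my-nginx
```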
For more information on Helm, see: Helm6
5 https://kubernetes.io/docs/reference/kubectl/overview/
6 https://helm.sh/
Visual Studio Code Kubernetes extension
You can deploy containerized micro-service-based applications to local Minikube clusters or to Azure
Kubernetes clusters, and debug live applications running in containers on those clusters.
The extension allows you to browse and manage Kubernetes clusters from within VS Code and
helps to streamline Kubernetes development.
For more information on the extension, see: Working with Kubernetes in VS Code7
For more information on local minikube clusters, see: Using Minikube to Create a Cluster8
7 https://code.visualstudio.com/docs/azure/kubernetes
8 https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/
The scenario
We will be building a Vehicle microservice that provides CRUD operations for sending vehicle data to a
Cosmos DB document store. The sample microservice needs to interact with configuration stores to
get values such as the connection string, the database name, and the collection name; we use Azure Key
Vault for this purpose. Additionally, the application needs the authentication token for Azure Key Vault
itself; these details, along with other configuration, will be stored in Kubernetes.
Responsibilities
The Ops engineer (or their scripts) is the configuration custodian, and they are the only ones who work in
the outer loop to manage all the configuration. They would have CI/CD scripts that inject these
configuration values, or use popular framework tools to enable the insertion during the build process.
Integration
The Vehicle API is an ASP.NET Core 2.0 application and is the configuration consumer here; the consumer
is interested in getting the values without really worrying about what the value is or which environment
it belongs to. The ASP.NET Core framework provides excellent support for this through its configuration
extensibility: you can add as many providers as you like, and they can be bound to an IConfiguration
object that provides access to all the configuration. In the code snippet below, we configure values to be
picked up from environment variables instead of a configuration file. The ASP.NET Core 2.0 framework
also supports extensions to include Azure Key Vault as a configuration provider, and under the hood the
Azure Key Vault client allows for secure access to the values required by the application.
// add the environment variables to config
config.AddEnvironmentVariables();
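A fuller sketch of how this might be wired up in an ASP.NET Core 2.x Program.cs is shown below. It assumes the Microsoft.Extensions.Configuration.AzureKeyVault package and the kvuri, clientId, and clientsecret environment variables injected by Kubernetes later in this lesson; it is illustrative rather than the exact code of the sample application:

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                // add the environment variables injected by Kubernetes to config
                config.AddEnvironmentVariables();
                var builtConfig = config.Build();

                // assumption: kvuri, clientId and clientsecret are supplied via the
                // ConfigMaps and Secret shown later in this lesson
                config.AddAzureKeyVault(
                    builtConfig["kvuri"],
                    builtConfig["clientId"],
                    builtConfig["clientsecret"]);
            })
            .UseStartup<Startup>()   // Startup is the application's standard startup class
            .Build();
}
```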
Azure Key Vault is the secret store for all the application-specific secrets. It allows for the creation
of these secrets and also for managing their lifecycle. It is recommended that you have a separate
Azure Key Vault per environment to ensure isolation. The following command can be used to list the
existing secrets in a Key Vault (the vault name is a placeholder):
#Get a list of existing secrets
az keyvault secret list --vault-name <your-keyvault-name> -o table
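The command that adds a new configuration value into Key Vault is not reproduced in this excerpt. It would typically look like the following; the vault name, secret name, and value are placeholders:

```bash
# Add (or update) a secret in the Key Vault
az keyvault secret set --vault-name <your-keyvault-name> --name ConnectionString --value "<connection-string-value>"
```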
The clientsecret is the only piece of secure information we store in Kubernetes; all the application-specific
secrets are stored in Azure Key Vault. This is comparatively safer, since the scripts above do not need to go
into the same Git repository, so we don't check them in by mistake, and they can be managed separately.
We still control the expiry of this secret using Azure Key Vault, so the security engineer retains full control
over access and permissions.
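The scripts that create the Kubernetes objects themselves are not shown in this excerpt. A minimal sketch of how the ConfigMaps and the Secret referenced by the deployment manifest below could be created is shown here; all of the values are placeholders:

```bash
# ConfigMaps for the non-secret settings referenced by the deployment manifest
kubectl create configmap clientid --from-literal=clientId=<app-registration-client-id>
kubectl create configmap kvuri --from-literal=kvuri=https://<your-keyvault-name>.vault.azure.net/
kubectl create configmap vault --from-literal=vault=<your-keyvault-name>

# The client secret is the only secure value stored in Kubernetes
kubectl create secret generic clientsecret --from-literal=clientSecret=<app-registration-client-secret>
```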
1. Injecting values into the container: At runtime, Kubernetes automatically pushes the above
values as environment variables for the deployed containers, so the system does not need to worry
about loading them from a configuration file. The Kubernetes configuration for the deployment looks
like the example below. As you will notice, we only provide a reference to the ConfigMaps and Secret that
have been created, instead of embedding the actual values.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: vehicle-api-deploy            #name for the deployment
  labels:
    app: vehicle-api                  #label that will be used to map the service, this tag is very important
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vehicle-api                #label that will be used to map the service, this tag is very important
  template:
    metadata:
      labels:
        app: vehicle-api              #label that will be used to map the service, this tag is very important
    spec:
      containers:
      - name: vehicleapi              #name for the container configuration
        image: <yourdockerhub>/<youdockerimage>:<youdockertagversion> # **CHANGE THIS: the tag for the container to be deployed
        imagePullPolicy: Always       #getting latest image on each deployment
        ports:
        - containerPort: 80           #map to port 80 in the docker container
          name: liveness-port         #named so that the probes below can reference it
        env:                          #set environment variables for the docker container using configMaps and Secret Keys
        - name: clientId
          valueFrom:
            configMapKeyRef:
              name: clientid
              key: clientId
        - name: kvuri
          valueFrom:
            configMapKeyRef:
              name: kvuri
              key: kvuri
        - name: vault
          valueFrom:
            configMapKeyRef:
              name: vault
              key: vault
        - name: clientsecret
          valueFrom:
            secretKeyRef:
              name: clientsecret
              key: clientSecret
        livenessProbe:
          httpGet:
            path: /health-check
            port: liveness-port
          failureThreshold: 2
          periodSeconds: 15
        startupProbe:
          httpGet:
            path: /health-check
            port: liveness-port
          failureThreshold: 40
          periodSeconds: 15
      imagePullSecrets:               #secret to get details of private repo, disable this if using public docker repo
      - name: regsecret
For more information on Kubernetes probes, see: Configure Liveness, Readiness and
Startup Probes9
9 https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Lab
Lab 16: Deploying a multi-container application
to Azure Kubernetes Services
Lab overview
Azure Kubernetes Service (AKS)10 is the quickest way to use Kubernetes on Azure. Azure Kubernetes
Service (AKS) manages your hosted Kubernetes environment, making it straightforward to deploy and
manage containerized applications without requiring container orchestration expertise. It also enhances
agility, scalability, and availability of your containerized workloads. Azure DevOps further streamlines AKS
operations by providing continuous build and deployment capabilities.
In this lab, you will use Azure DevOps to deploy a containerized ASP.NET Core web application My-
HealthClinic (MHC) to an AKS cluster.
Objectives
After you complete this lab, you will be able to:
●● Create an Azure DevOps team project with a .NET Core application using the Azure DevOps Demo
Generator tool.
●● Use Azure CLI to create an Azure Container Registry (ACR), an AKS cluster, and an Azure SQL database
●● Configure containerized application and database deployment by using Azure DevOps
●● Use Azure DevOps pipelines to build and automatically deploy containerized applications
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions11
10 https://azure.microsoft.com/en-us/services/kubernetes-service/
11 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 1
Is this statement true or false?
Azure Policy natively integrates with AKS, allowing you to enforce rules across multiple AKS clusters, and to track, validate, and configure nodes, pods, and container images for compliance.
True
False
Review Question 2
Kubernetes CLI is called _________.
HELM
ACI
AKS
KUBECTL
Review Question 3
For workloads running in AKS, the Kubernetes Web Dashboard allows you to view _______________________. Select
all that apply.
Config Map & Secrets
Logs
Storage
Azure Batch Metrics
Review Question 4
Pods can be described using which of the following languages? Select all that apply.
JSON
XML
PowerShell
YAML
Answers
Review Question 1
Is this statement true or false?
Azure Policy natively integrates with AKS, allowing you to enforce rules across multiple AKS clusters, and
to track, validate, and configure nodes, pods, and container images for compliance.
■■ True
False
Review Question 2
Kubernetes CLI is called _________.
HELM
ACI
AKS
■■ KUBECTL
Review Question 3
For workloads running in AKS, the Kubernetes Web Dashboard allows you to view _______________________.
Select all that apply.
■■ Config Map & Secrets
■■ Logs
■■ Storage
Azure Batch Metrics
Review Question 4
Pods can be described using which of the following languages? Select all that apply.
■■ JSON
XML
PowerShell
■■ YAML
Module 17 Implementing Feedback for Development Teams
Module overview
Module overview
When you go shopping for a car, do you refuse to take it on a test drive? Likely not. No matter how much
a salesperson hypes up a car, you must feel for yourself how smoothly it drives and how easily it brakes.
You also need to drive the car on a real road and in real conditions.
Software is the same way. Just deploying code into production and doing a health check is no longer
good enough. We're now looking beyond what we used to consider "done" and continuing to monitor
how the software runs after deployment. Getting feedback about what happens after the software is
deployed is essential to staying competitive and making our systems better.
Feedback loops are the essence of any process improvement and DevOps is no exception. The goal of
almost any process improvement initiative is to shorten and amplify feedback loops so necessary correc-
tions can be continually made.
A feedback loop, in general systems terms, uses its output as one of its inputs. The right feedback loop
must be fast, relevant, actionable, and accessible. Engineering teams need to set rules for acting on
different kinds of feedback and to own the quality of the code they check in. Feedback is fundamental
not only to DevOps practice but throughout the SDLC process.
A customized feedback loop and process is necessary for every organization; it acts as a control center
that lets you alter course early when things go wrong. As you can guess by now, every feedback loop
should at least allow teams to capture feedback (both technical and system feedback), raise the visibility
of this feedback, and act on it.
Learning objectives
After completing this module, students will be able to:
●● Configure crash report integration for client applications
●● Develop monitoring and status dashboards
Definitions
The easiest way to define the inner loop is as the iterative process that a developer performs when they
write, build, and debug code. There are other things that a developer does, but this is the tight set of
steps that is performed over and over before they share their work with their team or the rest of the
world.
Exactly what goes into an individual developer's inner loop will depend a great deal on the technologies
that they are working with, the tools being used and of course their own preferences. If I were working on
a library, my inner loop would include coding, build, test execution & debugging with regular commits to
my local Git repository. On the other hand, if I were doing some web front-end work I would probably be
optimized around hacking on HTML & JavaScript, bundling and refreshing the browser (followed by
regular commits).
Most codebases comprise multiple moving parts, so the definition of a developer's inner loop
on any single codebase might vary depending on what is being worked on.
Loop optimization
Having categorized the steps within the loop it is now possible to make some general statements:
●● You want to execute the loop as fast as possible and for the total loop execution time to be propor-
tional to the changes being made.
●● You want to minimize the time feedback collection takes but maximize the quality of the feedback
that you get.
●● You want to minimize the tax you pay by eliminating it where it isn't necessary on any run through the
loop (can you defer some operations until you commit for example).
●● As new code and more complexity are added to any codebase, the amount of outward pressure to
increase the size of the inner loop also increases (more code means more tests, which in turn means
more execution time and slower execution of the inner loop).
If you have ever worked on a large monolithic codebase it is possible to get into a situation where even
small changes require a disproportionate amount of time to execute the feedback collection steps of the
inner loop. This is a problem, and you should fix it.
There are several things that a team can do to optimize the inner loop for larger codebases:
1. Only build and test what was changed.
2. Cache intermediate build results to speed up full builds.
3. Break up the codebase into small units and share binaries.
How you tackle each one of those is probably a blog post in its own right. At Microsoft, for some of our
truly massive monolithic codebases we are investing quite heavily in #1 and #2, but #3 requires a special
mention because it can be a double-edged sword and, if done incorrectly, can have the opposite of the
desired impact.
Tangled loops
To understand the problem, we need to look beyond the inner loop. Let's say that our monolithic code-
base has an application specific framework which does a lot of heavy lifting. It would be tempting to
extract that framework into a set of packages.
To do this you would pull that code into a separate repository (optional, but this is generally the way it is
done), then setup a separate CI/CD pipeline that builds and publishes the package. This separate build
and release pipeline would also be fronted by a separate pull-request process to allow for changes to be
inspected before the code is published.
When someone needs to change this framework code, they clone down the repository, make their
changes (a separate inner loop) and submit a PR which is the transition of the workflow from the inner
loop to the outer loop. The framework package would then be available to be pulled into dependent
applications (in this case the monolith).
Initially things might work out well; however, at some point in the future it is likely that you'll want to
develop a new feature in the application that requires extensive new capabilities to be added to the
framework. This is where teams that have broken their codebases up in sub-optimal ways will start to feel
pain.
If you are having to co-evolve code in two separate repositories where a binary/library dependency is
present, then you are going to experience some friction. In loop terms - the inner loop of the original
codebase now (temporarily at least) includes the outer loop of the framework code that was previously
broken out.
Outer loops include a lot of tax such as code reviews, scanning passes, binary signing, release pipelines
and approvals. You don't want to pay that every time you've added a method to a class in the framework
and now want to use it in your application.
What generally ends up happening next is a series of local hacks by the developer to try to stitch the
inner loops together so that they can move forward efficiently, but it gets messy quickly and you must pay
that outer-loop tax at some point.
This isn't to say that breaking code up into separate packages is an inherently bad thing; it can work
brilliantly. You just need to make those incisions carefully.
Closing thoughts
There is no silver bullet solution that will ensure that your inner loop doesn't start slowing down, but it is
important to understand when it starts happening, what the cause is and work to address it.
Decisions ranging from how you build, test, and debug to the actual architecture itself will all impact how
productive developers are. Improving one aspect will often cause issues in another.
Continuous monitoring refers to the process and technology required to incorporate monitoring across
each phase of your DevOps and IT operations lifecycles. It helps to continuously ensure the health,
performance, and reliability of your application and infrastructure as it moves from development to
production. Continuous monitoring builds on the concepts of Continuous Integration and Continuous
Deployment (CI/CD) which help you develop and deliver software faster and more reliably to provide
continuous value to your users.
Azure Monitor1 is the unified monitoring solution in Azure that provides full-stack observability across
applications and infrastructure in the cloud and on-premises. It works seamlessly with Visual Studio and
Visual Studio Code2 during development and test and integrates with Azure DevOps3 for release
management and work item management during deployment and operations. It even integrates across
the ITSM and SIEM tools of your choice to help track issues and incidents within your existing IT process-
es.
1 https://docs.microsoft.com/en-us/azure/azure-monitor/overview
2 https://visualstudio.microsoft.com/
3 https://docs.microsoft.com/en-us/azure/devops/user-guide/index
This article describes specific steps for using Azure Monitor to enable continuous monitoring throughout
your workflows. It includes links to other documentation that provides details on implementing different
features.
Infrastructure as code18 is the management of infrastructure in a descriptive model, using the same
versioning that DevOps teams use for source code. It adds reliability and scalability to your environment
and allows you to apply processes similar to those you use to manage your applications.
●● Use Resource Manager templates19 to enable monitoring and configure alerts over a large set of
resources.
●● Use Azure Policy20 to enforce different rules over your resources. This ensures that those resources
stay compliant with your corporate standards and service level agreements.
18 https://docs.microsoft.com/en-us/azure/devops/learn/what-is-infrastructure-as-code
19 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/template-workspace-configuration
20 https://docs.microsoft.com/en-us/azure/governance/policy/overview
21 https://docs.microsoft.com/en-us/azure/azure-monitor/continuous-monitoring#combine-resources-in-azure-resource-groups
22 https://docs.microsoft.com/en-us/azure/azure-monitor/insights/resource-group-insights
23 https://docs.microsoft.com/en-us/azure/azure-monitor/continuous-monitoring#ensure-quality-through-continuous-deployment
24 https://docs.microsoft.com/en-us/azure/devops/pipelines
25 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-separate-resources
26 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-charts
27 https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/cross-workspace-query
Continuously optimize40
Monitoring is one of the fundamental aspects of the popular Build-Measure-Learn philosophy, which
recommends continuously tracking your KPIs and user behavior metrics and then striving to optimize
them through planning iterations. Azure Monitor helps you collect metrics and logs relevant to your
business and to add new data points in the next deployment as required.
●● Use tools in Application Insights to track end-user behavior and engagement41.
●● Use Impact Analysis42 to help you prioritize which areas to focus on to drive to important KPIs.
28 https://docs.microsoft.com/en-us/azure/azure-monitor/continuous-monitoring#create-actionable-alerts-with-actions
29 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-overview
30 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-dynamic-thresholds
31 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/action-groups#create-an-action-group-by-using-the-azure-portal
32 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-overview
33 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/activity-log-alerts-webhook
34 https://docs.microsoft.com/en-us/azure/automation/automation-webhooks
35 https://docs.microsoft.com/en-us/connectors/custom-connectors/create-webhook-trigger
36 https://docs.microsoft.com/en-us/azure/azure-monitor/learn/tutorial-autoscale-performance-schedule
37 https://docs.microsoft.com/en-us/azure/azure-monitor/continuous-monitoring#prepare-dashboards-and-workbooks
38 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-tutorial-dashboards
39 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-usage-workbooks
40 https://docs.microsoft.com/en-us/azure/azure-monitor/continuous-monitoring#continuously-optimize
41 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-tutorial-users
42 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-usage-impact
In this tutorial, we'll focus on the Log Analytics part of Azure Monitor. We'll learn how to:
●● Set up a Log Analytics workspace.
●● Connect virtual machines to a Log Analytics workspace.
●● Configure the Log Analytics workspace to collect custom performance counters.
●● Analyze the telemetry using the Kusto Query Language.
Getting started
1. To follow along, you'll need a resource group with one or more virtual machines that you have RDP
access to.
2. Log into Azure Shell43. Executing the commands below will create a new resource group and a
new Log Analytics workspace. Take a note of the workspace ID of the Log Analytics workspace, as we'll be
using it again.
$ResourceGroup = "azwe-rg-devtest-logs-001"
$WorkspaceName = "azwe-devtest-logs-01"
$Location = "westeurope"
43 http://shell.azure.com/powershell
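The commands that actually create the resource group, the workspace, and the $Solutions list used in the next snippet are not shown on this page. A minimal sketch using the Az PowerShell module might look like the following; the SKU and the list of solutions are assumptions:

```powershell
# Create the resource group and the Log Analytics workspace
New-AzResourceGroup -Name $ResourceGroup -Location $Location
$Workspace = New-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroup `
    -Name $WorkspaceName -Location $Location -Sku "PerGB2018"

# The workspace ID, needed later when onboarding virtual machines
$Workspace.CustomerId

# Monitoring solutions to enable on the workspace
$Solutions = "Security", "Updates", "SQLAssessment"
```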
# Add solutions
foreach ($solution in $Solutions) {
    Set-AzOperationalInsightsIntelligencePack -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -IntelligencePackName $solution -Enabled $true
}

# Windows Event
New-AzOperationalInsightsWindowsEventDataSource -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -EventLogName "Application" -CollectErrors -CollectWarnings -Name "Example Application Event Log"
4. Map existing virtual machines to the Log Analytics workspace. The script below uses the workspace ID
and the workspace's secret (primary) key to install the Microsoft Enterprise Cloud Monitoring extension
onto an existing VM.
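The onboarding script itself does not appear on this page. A minimal sketch for a Windows VM, assuming the $ResourceGroup, $WorkspaceName, and $Location variables from earlier, an existing VM name, and the workspace's primary key copied from the portal, might look like this:

```powershell
$VMName = "<your-vm-name>"                # an existing Windows VM in the resource group
$WorkspaceKey = "<workspace-primary-key>" # copy from the workspace's agent management page

# Look up the workspace ID (CustomerId) of the Log Analytics workspace
$Workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroup -Name $WorkspaceName

# Install the Microsoft Enterprise Cloud Monitoring (Microsoft Monitoring Agent) extension
Set-AzVMExtension -ResourceGroupName $ResourceGroup -VMName $VMName `
    -Name "MicrosoftMonitoringAgent" `
    -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
    -ExtensionType "MicrosoftMonitoringAgent" `
    -TypeHandlerVersion "1.0" `
    -Location $Location `
    -Settings @{ workspaceId = $Workspace.CustomerId.ToString() } `
    -ProtectedSettings @{ workspaceKey = $WorkspaceKey }
```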
5. Run the script to configure the below listed performance counters to be collected from the virtual
machine.
#Login-AzureRmAccount
#Instance
##################################
$InstanceNameAll = "*"
$InstanceNameTotal = '_Total'
#Objects
##################################
$ObjectCache = "Cache"
$ObjectLogicalDisk = "LogicalDisk"
$ObjectMemory = "Memory"
$ObjectNetworkAdapter = "Network Adapter"
$ObjectNetworkInterface = "Network Interface"
$ObjectPagingFile = "Paging File"
$ObjectProcess = "Process"
$ObjectProcessorInformation = "Processor Information"
$ObjectProcessor = "Processor"
$ObjectSQLAgentAlerts = "SQLAgent:Alerts"
$ObjectSQLAgentJobs = "SQLAgent:Jobs"
$ObjectSQLAgentStatistics = "SQLAgent:Statistics"
$ObjectSystem = "System"
#Counters
#########################################################
$CounterCache = "Copy Read Hits %"
$CounterLogicalDisk =
"% Free Space" `
,"Avg. Disk sec/Read" `
,"Avg. Disk sec/Transfer" `
,"Avg. Disk sec/Write" `
,"Current Disk Queue Length" `
,"Disk Read Bytes/sec" `
,"Disk Reads/sec" `
,"Disk Transfers/sec" `
,"Disk Writes/sec"
$CounterMemory =
"% Committed Bytes In Use" `
,"Available MBytes" `
,"Page Faults/sec" `
,"Pages Input/sec" `
,"Pages Output/sec" `
,"Pool Nonpaged Bytes"
$CounterNetworkAdapter =
"Bytes Received/sec" `
,"Bytes Sent/sec"
$CounterPagingFile =
"% Usage" `
,"% Usage Peak"
$CounterProcessorInformation =
"% Interrupt Time" `
,"Interrupts/sec"
#########################################################
$global:number = 1 # The Name parameter needs to be unique, which is why we increment this number in the function
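The AddPerfCounters helper invoked at the end of this script is not reproduced in this excerpt. A minimal sketch of what such a function might look like, assuming the Az.OperationalInsights module and the $ResourceGroup and $WorkspaceName variables defined earlier, is:

```powershell
function AddPerfCounters {
    param (
        [string]   $PerfObject,   # performance object, for example "LogicalDisk"
        [string[]] $PerfCounter,  # one or more counter names for that object
        [string]   $Instance      # instance filter, for example "*" or "_Total"
    )
    foreach ($counter in $PerfCounter) {
        # Each data source needs a unique name, hence the incrementing global counter
        New-AzOperationalInsightsWindowsPerformanceCounterDataSource `
            -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName `
            -ObjectName $PerfObject -InstanceName $Instance -CounterName $counter `
            -IntervalSeconds 60 -Name "Windows Performance Counter $global:number"
        $global:number++
    }
}
```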
#########################################################
AddPerfCounters -PerfObject $ObjectCache -PerfCounter $CounterCache -Instance $InstanceNameAll
6. To generate some interesting performance statistics, download the HeavyLoad utility44 (a free load
testing utility) and run it on the virtual machine to simulate high CPU, memory, and IOPS consumption.
How it works
1. Log Analytics works by running the Microsoft Monitoring Agent service on the machine. The service
locally captures and buffers the events and pushes them securely out to the Log Analytics workspace
in Azure.
2. Log into the virtual machine, navigate to C:\Program Files\Microsoft Monitoring Agent\MMA, and
open the agent control panel. This shows you the details of the connected Log Analytics workspace. You
also have the option of adding multiple Log Analytics workspaces to publish the log data into multiple
workspaces.
Summary
So far, we've created a log analytics workspace in a resource group. The log analytics workspace has been
configured to collect performance counters, event logs and IIS Logs. A virtual machine has been mapped
to the log analytics workspace using the Microsoft Enterprise cloud monitoring extension. HeavyLoad has
been used to simulate high CPU, memory and IOPS on the virtual machine.
In the next lessons, we will continue by querying the log analytics data.
44 https://www.jam-software.com/heavyload/
45 https://docs.microsoft.com/en-us/azure/data-explorer/kusto/concepts/
Walkthrough
Note: This walkthrough continues the previous lesson on Azure Log Analytics and the walkthrough
started within it.
1. Log into Azure Portal46 and navigate to the log analytics workspace. From the left blade in the log
analytics workspace click Logs. This will open the Logs window, ready for you to start exploring all the
datapoints captured into the workspace.
2. To query the logs we'll need to use the Kusto Query Language. Run the query below to list the last
heartbeat of each machine connected to the log analytics workspace.
// Last heartbeat of each computer
// Show the last heartbeat sent by each computer
Heartbeat
| summarize arg_max(TimeGenerated, *) by Computer
4. Show a count of the data points collected in the last 24 hours. In the result below, you can see we
have 66M data points that we are able to query against in near real time to analyse and correlate
insights.
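The query for this step is not shown on this page. A query along the following lines (an assumption, not necessarily the exact query used in the original walkthrough) returns the total number of records ingested across all tables in the last 24 hours:

```kusto
// Count all data points collected across all tables in the last 24 hours
union withsource = SourceTable *
| where TimeGenerated > ago(24h)
| summarize count()
```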
5. Run the query below to generate the max CPU Utilization trend over the last 24 hours, aggregated at
a granularity of 1 min. Render the data as timechart.
Perf
| where ObjectName == "Processor" and InstanceName == "_Total"
| summarize max(CounterValue) by Computer, bin(TimeGenerated, 1m)
| render timechart
46 https://portal.azure.com
6. Run the query below to see all the processes running on that machine that are contributing to the
CPU Utilization. Render the data in a pie chart.
Perf
| where ObjectName contains "process"
and InstanceName !in ("_Total", "Idle")
and CounterName == "% Processor Time"
| summarize avg(CounterValue) by InstanceName, CounterName, bin(TimeGener-
ated, 1m)
| render piechart
There's more
This tutorial has introduced you to the basic concepts of Log Analytics and how to get started with the
basics. We've only scratched the surface of what's possible with Log Analytics. I would encourage you to
try out the advanced tutorials available for Log Analytics on Microsoft Docs47
Application Insights
You install a small instrumentation package in your application and set up an Application Insights
resource in the Microsoft Azure portal. The instrumentation monitors your app and sends telemetry data
to the portal. (The application can run anywhere - it doesn't have to be hosted in Azure.)
You can instrument not only the web service application, but also any background components, and the
JavaScript in the web pages themselves.
47 https://docs.microsoft.com/en-us/azure/azure-monitor/
In addition, you can pull in telemetry from the host environments such as performance counters, Azure
diagnostics, or Docker logs. You can also set up web tests that periodically send synthetic requests to
your web service.
All these telemetry streams are integrated in the Azure portal, where you can apply powerful analytic and
search tools to the raw data.
●● AJAX calls from web pages - rates, response times, and failure rates.
●● User and session counts.
●● Performance counters from your Windows or Linux server machines, such as CPU, memory, and
network usage.
●● Host diagnostics from Docker or Azure.
●● Diagnostic trace logs from your app - so that you can correlate trace events with requests.
●● Custom events and metrics that you write yourself in the client or server code, to track business
events such as items sold or games won.
Application map52
The components of your app, with key metrics and alerts.
Profiler53
Inspect the execution profiles of sampled requests.
50 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-proactive-diagnostics
51 https://docs.microsoft.com/en-us/azure/azure-monitor/app/alerts
52 https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-map
53 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-profiler
Usage analysis54
Analyze user segmentation and retention.
Dashboards
Mash up data from multiple resources and share with others. Great for multi-component applications,
and for continuous display in the team room.
54 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-usage-overview
55 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-diagnostic-search
Analytics
Answer tough questions about your app's performance and usage by using this powerful query language.
Visual Studio
See performance data in the code. Go to code from stack traces.
Snapshot debugger
Debug snapshots sampled from live operations, with parameter values.
Power BI
Integrate usage metrics with other business intelligence.
REST API
Write code to run queries over your metrics and raw data.
Continuous export
Bulk export of raw data to storage as soon as it arrives.
Detect, Diagnose
When you receive an alert or discover a problem:
●● Assess how many users are affected.
●● Correlate failures with exceptions, dependency calls and traces.
Get started
Application Insights is one of the many services hosted within Microsoft Azure, and telemetry is sent
there for analysis and presentation. So, before you do anything else, you'll need a subscription to Micro-
soft Azure60. It's free to sign up, and if you choose the basic pricing plan61 of Application Insights,
there's no charge until your application has grown to have substantial usage. If your organization already
has a subscription, they could add your Microsoft account to it.
There are several ways to get started. Begin with whichever works best for you. You can add the others
later.
At run time
Instrument your web app on the server. Avoids any update to the code. You need admin access to your
server.
●● IIS on-premises or on a VM62
●● Azure web app or VM63
●● J2EE64
At development time
Add Application Insights to your code. Allows you to write custom telemetry and to instrument back-end
and desktop apps.
●● Visual Studio65 2013 update 2 or later.
●● Java66
●● Node.js
●● Other platforms67
●● Instrument your web pages68 for page view, AJAX, and other client-side telemetry.
59 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-usage-overview
60 https://azure.com/
61 https://azure.microsoft.com/pricing/details/application-insights/
62 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-monitor-performance-live-website-now
63 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-monitor-performance-live-website-now
64 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-java-live
65 https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net
66 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-java-get-started
67 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-platforms
68 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-javascript
●● Analyze mobile app usage69 by integrating with Visual Studio App Center.
●● Availability tests70 - ping your website regularly from our servers.
Getting started
1. To add Application Insights to your ASP.NET website, you need to:
●● Install Visual Studio 2019 for Windows with the following workloads:
●● ASP.NET and web development (Do not uncheck the optional components)
2. In Visual Studio, create a new .NET Core project. Right-click the project and, from the context menu, select Add > Application Insights Telemetry.
69 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-mobile-center-quickstart
70 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-monitor-web-app-availability
(Depending on your Application Insights SDK version you may be prompted to upgrade to the latest SDK
release. If prompted, select Update SDK.)
3. From the Application Insights configuration screen, click Get started to start setting up App Insights.
4. Choose to set up a new resource group and select the location where you want the telemetry data to
be persisted.
Summary
So far, we've added App Insights in a dotnet core application. The Application Insights getting started
experience gives you the ability to create a new resource group in the desired location where the App
Insights instance gets created. The instrumentation key for the app insights instance is injected into the
application configuration automatically.
How to do it
1. Run your app with F5. Open different pages to generate some telemetry. In Visual Studio, you will see
a count of the events that have been logged.
2. You can see your telemetry either in Visual Studio or in the Application Insights web portal. Search
telemetry in Visual Studio to help you debug your app. Monitor performance and usage in the web
portal when your system is live. In Visual Studio, to view Application Insights data, select Solution
Explorer > Connected Services, right-click Application Insights, and then click Search Live Telemetry.
In the Visual Studio Application Insights Search window, you will see the data from your application for
telemetry generated in the server side of your app. Experiment with the filters, and click any event to see
more detail.
3. You can also see telemetry in the Application Insights web portal (unless you chose to install only the
SDK). The portal has more charts, analytic tools, and cross-component views than Visual Studio. The
portal also provides alerts.
Open your Application Insights resource. Either sign into the Azure portal and find it there, or select
Solution Explorer > Connected Services > right-click Application Insights > Open Application Insights
Portal and let it take you there.
The portal opens on a view of the telemetry from your app.
How it works
Application Insights configures a unique key (the instrumentation key, sometimes called the AppInsights
key) in your application. This key is used by the Application Insights SDK to identify the Azure Application
Insights resource that the telemetry data needs to be uploaded into. The SDK and the key are merely used
to pump the telemetry data points out of your application; the heavy lifting of data correlation, analysis,
and insights is done within Azure.
There's more
In this tutorial we learned how to get started by adding Application Insights into your dotnet core
application. App Insights offers a wide range of features. You can learn more about these at Start
Monitoring Your ASP.NET Core Web Application71.
71 https://docs.microsoft.com/en-us/azure/azure-monitor/learn/dotnetcore-quick-start
Crashes
Crashes are what happens when a runtime exception occurs from an unexpected event that terminates
the app. These are errors not handled by a try/catch block. When a crash occurs, App Center Crashes
records the state of the app and device and automatically generates a crash log. These logs contain
valuable information to help you fix the crash.
Grouping
App Center Diagnostics groups crashes and errors by similarities, such as reason for the issue and where
the issue occurred in the app. For each crash and error group, App Center displays the line of code that
failed, the class or method name, file name, line number, crash or error type and message for you to
better understand these groups at a glance. Select a group to view more information and access a list of
detailed issues reports and logs. This allows you to dive even deeper and use our feature set to better
understand your app's behavior during a crash or an error.
Attachments
In the App Center Diagnostics UI, you can attach, view, and download one binary and one text attach-
ment to your crash reports.
You can learn how to add attachments to your crash reports by reading the SDK Crashes documentation
for your Android72, iOS73, macOS74, React Native75, Xamarin76, and Apache Cordova77 apps.
To view and download the attachments, select a crash group, a specific device report and then click on
the attachments tab.
72 https://docs.microsoft.com/en-us/appcenter/sdk/crashes/android#add-attachments-to-a-crash-report
73 https://docs.microsoft.com/en-us/appcenter/sdk/crashes/ios#add-attachments-to-a-crash-report
74 https://docs.microsoft.com/en-us/appcenter/sdk/crashes/macos#add-attachments-to-a-crash-report
75 https://docs.microsoft.com/en-us/appcenter/sdk/crashes/react-native#add-attachments-to-a-crash-report
76 https://docs.microsoft.com/en-us/appcenter/sdk/crashes/xamarin#add-attachments-to-a-crash-report
77 https://docs.microsoft.com/en-us/appcenter/sdk/crashes/cordova#add-attachments-to-a-crash-report
Configure alerts
Stay on top of your crashes by configuring your App Center app definition settings to send an email
when a new crash group is created. To configure these alerts:
1. Log into App Center and select your app.
2. In the left menu, navigate to Settings.
3. Click on Email Notifications.
4. Select the box next to Crashes.
85 https://docs.microsoft.com/en-us/appcenter/dashboard/bugtracker/index
3. Under Add bug tracker, fill in the fields for Number of crashes, Area and Default Payload, and click
Add:
●● Number of crashes is a threshold you can set for the minimum number of crashes to happen in a
crash group before a ticket is created in Azure DevOps.
●● Default payload is an optional field to fill in for use in work items. For example:
{"System.IterationPath": "Area\Iteration 1", "System.AssignedTo": "Fabrikam"}.
Please refer to the work item types API86 for additional information.
86 https://docs.microsoft.com/en-us/rest/api/vsts/wit/work%20item%20types
Advantages
●● Deep integration into Azure. Visualizations can be pinned to dashboards from multiple Azure pages
including metrics analytics, log analytics, and Application Insights.
●● Supports both metrics and logs.
●● Combine data from multiple sources including output from Metrics explorer88, Log Analytics
queries89, and maps90 and availability91 in Application Insights.
●● Option for personal or shared dashboards. Integrated with Azure role-based access control
(RBAC)92.
●● Automatic refresh. Metrics refresh depends on time range with minimum of five minutes. Logs refresh
at one minute.
87 https://docs.microsoft.com/en-us/azure/azure-portal/azure-portal-dashboards
88 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-charts
89 https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/log-query-overview
90 https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-map
91 https://docs.microsoft.com/en-us/azure/azure-monitor/visualizations?toc=/azure/azure-monitor/toc.json
92 https://docs.microsoft.com/en-us/azure/role-based-access-control/overview
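As a point of reference for the "combine data from multiple sources" advantage above, the same metric and log data you pin to a dashboard can also be pulled from PowerShell. This is only a minimal sketch under assumptions: the resource ID, workspace GUID, metric name, and table names are placeholders that depend on what your resources actually emit, and it requires the Az.Monitor and Az.OperationalInsights modules plus Connect-AzAccount.

# Metrics (the data behind Metrics explorer): average HTTP response time for a web app
$appServiceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<web-app>"
Get-AzMetric -ResourceId $appServiceId -MetricName "HttpResponseTime" -TimeGrain 00:05:00 -AggregationType Average

# Logs (the data behind Log Analytics queries): request counts, assuming a workspace-based Application Insights resource
$workspaceId = "<log-analytics-workspace-guid>"
$kql = "AppRequests | summarize count() by ResultCode | order by count_ desc"
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql -Timespan (New-TimeSpan -Hours 1)).Results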
Limitations
●● Limited control over log visualizations with no support for data tables. The total number of data series is limited to 10, with further data series grouped under an "other" bucket.
●● No custom parameters support for log charts.
●● Log charts are limited to last 30 days.
●● Log charts can only be pinned to shared dashboards.
●● No interactivity with dashboard data.
●● Limited contextual drill-down.
Advantages
●● Rich visualizations for log data.
●● Export and import views to transfer them to other resource groups and subscriptions.
●● Integrates into the Log Analytics management model with workspaces and monitoring solutions.
●● Filters95 for custom parameters.
●● Interactive, supports multi-level drill-in (a view that drills into another view).
93 https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-view-designer
94 https://docs.microsoft.com/en-us/azure/azure-monitor/insights/solutions
95 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/view-designer-filters
Limitations
●● Supports logs but not metrics.
●● No personal views. Available to all users with access to the workspace.
●● No automatic refresh.
●● Limited layout options.
●● No support for querying across multiple workspaces or Application Insights applications.
●● Queries are limited to a response size of 8 MB and a query execution time of 110 seconds.
Advantages
●● Supports both metrics and logs.
●● Supports parameters, enabling interactive reports where selecting an element in a table dynamically updates associated charts and visualizations.
●● Document-like flow.
●● Option for personal or shared workbooks.
●● Easy, collaborative-friendly authoring experience.
●● Template support through a public GitHub-based template gallery.
96 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-usage-workbooks
Limitations
●● No automatic refresh.
●● No dense layout like dashboards, which makes workbooks less useful as a single pane of glass; they are intended more for providing deeper insights.
Power BI
Power BI97 is particularly useful for creating business-centric dashboards and reports, as well as reports
analyzing long-term KPI trends. You can import the results of a log query98 into a Power BI dataset so
you can take advantage of its features such as combining data from different sources and sharing reports
on the web and mobile devices.
Advantages
●● Rich visualizations
●● Extensive interactivity including zoom-in and cross-filtering
●● Easy to share throughout your organization
●● Integration with other data from multiple data sources
●● Better performance with results cached in a cube
Limitations
●● Supports logs but not metrics
●● No Azure RM integration. Can't manage dashboards and models through Azure Resource Manager
97 https://powerbi.microsoft.com/documentation/powerbi-service-get-started/
98 https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-powerbi
●● Query results need to be imported into the Power BI model to be configured, with limitations on result size and refresh
●● Limited data refresh of eight times per day for Pro licenses (currently 48 for Premium)
Grafana
Grafana99 is an open platform that excels at operational dashboards. It's particularly useful for detecting, isolating, and triaging operational incidents. You can add the Azure Monitor data source plugin100 to Grafana to have it visualize your Azure metrics data.
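The Grafana data source authenticates with a service principal that has read access to your monitoring data. The following is a minimal sketch, assuming the Az.Resources module; the display name and subscription ID are placeholders.

# Create a service principal for the Grafana Azure Monitor data source
$sp = New-AzADServicePrincipal -DisplayName "grafana-azure-monitor-reader"

# Grant read-only access to monitoring data at the subscription scope
New-AzRoleAssignment -ApplicationId $sp.AppId -RoleDefinitionName "Monitoring Reader" -Scope "/subscriptions/<subscription-id>"
# Note: on older Az versions the property is $sp.ApplicationId rather than $sp.AppId.

# Grafana's data source settings then need the tenant ID, the application (client) ID above,
# and a client secret (for example, one created with New-AzADSpCredential).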
Advantages
●● Rich visualizations
●● Rich ecosystem of data sources
●● Data interactivity including zoom in
●● Supports parameters
Limitations
●● Supports metrics but not logs.
●● No Azure integration. Can't manage dashboards and models through Azure Resource Manager.
●● Cost to support additional Grafana infrastructure or additional cost for Grafana Cloud.
99 https://grafana.com/
100 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/grafana-plugin
Advantages
●● Complete flexibility in UI, visualization, interactivity, and features.
●● Combine metrics and log data with other data sources.
Disadvantages
●● Significant engineering effort required.
Click on this, and the configuration blade for work items will open. All you need to do is fill out the
information about the VSTS system to which you want to connect, along with the project where you want
to write your work items:
101 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-overview
Once that information is in place, you can click on the Authorization button, where you will be redirected
to authorize access in your selected VSTS system so that work items can be written there:
Once you've completed the authorization process, you can set defaults for "area path" and "assigned to." Only area path is required (if you haven't set up specific area paths in your project, that's OK; just use the name of the project, as it is the top-level area path). Click OK, and assuming you've entered everything correctly, you'll see a message stating "Validation Successful" and the blade will close. You're now ready to start creating work items!
We can see that I have several exceptions that fired when the user clicked on the Home/About tab on this
web app. If I drill into this group of exceptions, I can see the list, and then choose an individual exception:
Looking at the detail blade for this exception, we see that there are now two buttons available at the top
of the blade that read “New Work Item” and “View Work Items.” To create a work item, I simply click on
the first of these buttons, and it opens the new work item blade:
As you can see, just about everything you need in your average scenario has been filled out for you. The
default values for “area path” and "assigned to" that you designated in the initial configuration are set,
and all the detail information we have available for this exception has been added to the details field. You
can override the title, area path and assigned to fields in this blade if you wish, or you can add to the
captured details. When you're ready to create your work item, just click the "OK" button, and your work item will be written to VSTS.
If you click the link for the work item that you want to view, it will open in VSTS:
Advanced Configuration
Some of you may have noticed that there is a switch on the configuration blade that is labeled “Advanced
Configuration.” This is additional functionality that we’ve provided to help you configure your ability to
write to VSTS in scenarios where you’ve changed or extended some of the out-of-the-box settings. A
good example of this is designating additional required fields. Currently, there is no way to handle this
additional required mapping in the standard config, but you can handle it in advanced mode.
If you click on the switch, the controls at the bottom of the blade will change to look like this:
You can see that you are now given a JSON-based editing box where you can specify all the settings/
mappings that you might need to handle modifications to your VSTS project. Some sample text is filled in
for you. Soon, we plan to enhance this control with IntelliSense as well as publish some basic guidance to better understand the advanced configuration mode.
Next steps
We think that this is a good start to integrating work item functionality with Application Insights. But
please keep in mind that this is essentially the 1.0 version of this feature set. We have a lot of work
planned, and you will see a significant evolution in this space over the upcoming months. Just for starters,
let me outline a few of the things that we already have planned or are investigating:
●● Support for all work item types – You probably noticed that the current feature set locks the work item
type to just “bug.” Logging bugs was our primary ask for this space, so that’s where we started, but we
certainly don’t think that’s where things should end. One of the more near-term changes that you will
see is to handle all work item types for all supported processes in VSTS.
●● Links back to Application Insights – It’s great to be able to create a work item with App Insights data in
it, but what happens when you’re in your ALM system and looking at that item and want to quickly
navigate back to the source of the work item in App Insights? We plan to add links to the work items
in the very near future to make this as fast and easy as possible.
●● More flexible configuration – Currently, our standard configuration only handles scenarios where the
user has not significantly modified/extended their project in VSTS. Today, if you’ve made these kinds
of changes, you’ll need to switch to advanced configuration mode. Going forward, we want to handle
common things that people might change (e.g., making additional fields required, adding new fields)
in the standard configuration wherever possible. This requires some updates from our friends on the
VSTS team, but they are already working on some of these for us. Once they’re available, we will begin
to make the standard configuration more flexible. In the meantime (and in the future), you can always
use the advanced configuration to overcome any limitations.
●● Multiple profiles – Setting up a single configuration means that in shops where there are several ways
in which users commonly create work items, the people creating work items from Application Insights
would have to frequently override values. We plan to give users the capability to set up 1:n profiles,
with standard values specified for each so that when you want to create a work item with that profile,
you can simply choose it from a drop-down list.
●● More sources of creation for work items – We will continue to investigate (and take feedback on) other
places in Application Insights where it makes sense to create work items.
●● Automatic creation of work items – There are certainly scenarios we can imagine (and I’m sure you can
too) where we might want a work item to be created for us based upon criteria. This is on the radar,
but we are spending some design time with this to limit possibilities of super-noisy or runaway work
item creation. We believe that this is a powerful and convenient feature, but we want to reduce the
potential for spamming the ALM system as much as possible.
●● Support for other ALM systems – Hey, we think that VSTS is an awesome product, but we recognize
that many of our users may use some other product for their ALM, and we want to meet people
where they are. So, we are working on additional first-tier integrations of popular ALM products. We
also plan to provide a pure custom configuration choice (like advanced config for VSTS) so that end
users will be able to hook up Application Insights to virtually any ALM system.
Lab
Lab 17: Monitoring application performance
with Application Insights
Lab overview
Application Insights is an extensible Application Performance Management (APM) service for web
developers on multiple platforms. You can use it to monitor your live web applications. It automatically
detects performance anomalies, includes powerful analytics tools to help you diagnose issues, and helps
you continuously improve performance and usability. It works for apps on a wide variety of platforms
including .NET, Node.js and Java EE, hosted on-premises, hybrid, or any public cloud. It integrates with
your DevOps process with connection points available in a variety of development tools. It also allows
you to monitor and analyze telemetry from mobile apps through integration with Visual Studio App
Center.
In this lab, you'll learn about how you can add Application Insights to an existing web application, as well
as how to monitor the application via the Azure portal.
Objectives
After you complete this lab, you will be able to:
●● Deploy Azure App Service web apps
●● Generate and monitor Azure web app application traffic by using Application Insights
●● Investigate Azure web app performance by using Application Insights
●● Track Azure web app usage by using Application Insights
●● Create Azure web app alerts by using Application Insights
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions102
102 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
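The lab itself walks through the Azure portal, but as a point of reference, the kind of resources it works with can also be created from PowerShell. This is only a sketch: the names below are placeholders, not the lab's actual values, and it assumes the Az module and Connect-AzAccount.

# Resource group, App Service plan, and web app of the sort monitored with Application Insights
New-AzResourceGroup -Name "az400-monitoring-rg" -Location "eastus"
New-AzAppServicePlan -ResourceGroupName "az400-monitoring-rg" -Name "az400-plan" -Location "eastus" -Tier "Standard"
New-AzWebApp -ResourceGroupName "az400-monitoring-rg" -Name "az400-webapp-<unique-suffix>" -Location "eastus" -AppServicePlan "az400-plan"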
Suggested answer
Review Question 2
What features are provided by Azure Monitor?
Review Question 3
What query language can you use to query Azure Log Analytics?
Review Question 4
What platform integration does Azure Monitor provide to visualize your logs in real time?
Answers
Review Question 1
Does Azure Monitor allow you to create alerts from log queries?
■■ yes
no
What features are provided by Azure Monitor?
1. Detect and diagnose issues across applications and dependencies with Application Insights.
2. Correlate infrastructure issues with Azure Monitor for VMs and Azure Monitor for Containers.
3. Create visualizations with Azure dashboards and workbooks.
4. Support operations at scale with smart alerts and automated actions.
What query language can you use to query Azure Log Analytics?
Kusto
What platform integration does Azure Monitor provide to visualize your logs in real time?
Module overview
The goal of DevOps is to increase the speed and quality of software services, something every IT organi-
zation needs. But how can the parties involved in DevOps accelerate delivery if they don't understand
what they deliver or know the people to whom they deliver? Despite great advancements in delivery,
quality service still begins with understanding the needs of users.
Unsatisfied customers are probably costing you a lot of money. The first step to overcoming this is to
admit that you have room for improvement. The second step is to measure customer satisfaction to find
out where you currently stand.
Engaging customers throughout your product lifecycle is a primary Agile principle. Empower each team
to interact directly with customers on the feature sets they own.
●● Continuous feedback: Build in customer feedback loops. These can take many forms:
●● Customer voice: Make it easy for customers to give feedback, add ideas, and vote on next genera-
tion features.
●● Product feedback: In-product feedback buttons are another way to solicit feedback about the
product experience or specific features.
●● Customer demos: Regularly scheduled demos that solicit feedback from your customers can help
shape next generation products and keep you on track to build applications your customers want
to consume.
●● Early adopter programs: Such programs should be developed with the idea that all teams may want to participate at some point. Early adopters gain access to early versions of working software on which they can then provide feedback. Oftentimes, these programs work by turning select feature flags on for an early adopter list.
●● Data-driven decisions: Find ways to instrument your product to obtain useful data that can test various hypotheses. Help drive an experiment-friendly culture that celebrates learning.
Learning objectives
After completing this module, students will be able to:
●● Define site reliability engineering
●● Design processes to measure end-user satisfaction and analyze user feedback
●● Design processes to automate application analytics
●● Manage alerts and reduce meaningless and non-actionable alerts
●● Carry out blameless retrospectives and create a just culture
615
Benefits
●● Context-Sensitive Feedback. Users will already be using your product, so they can provide feedback
based on their actual usage or needs at the time. Priceless.
●● Always on. By implementing feedback mechanisms within the product itself, you’ve made it very easy
for users to provide input, at any point, without sending a formal survey or otherwise cluttering an
inbox in hopes of getting a hit.
●● High response rates. Since the feedback mechanism is built into your services, users can access it when they need it. That could mean reporting a problem, bug, enhancement, or glitch, or complimenting the team on their choice of user experience.
Weaknesses
●● Too. Much. Feedback. There are a lot of channels which users can tap into to provide feedback.
Sometimes, there are just too many to stay on top of them all. After all, it would be a shame to collect
feedback from multiple channels and not have means to review it all.
●● Not enough detail. If you are posting micro-surveys within your site, the information you get back
from respondents may not be sufficiently detailed to allow it to be actionable.
●● Always on. By implementing feedback mechanisms within the product itself, you’ve made it very easy
for users to provide input, at any point, without sending a formal survey or otherwise cluttering an
inbox in hopes of getting a hit. But sometimes that feedback may be irrelevant given new decisions.
●● Limited future follow-up. Depending on the tools being used, you may not have enough contact
information to follow-up directly with the person who submitted the feedback and delve deeper into
their responses.
Tapping into in-product feedback is helpful throughout the product development lifecycle. Despite its weaknesses, if done right, in-app feedback is an excellent way to validate existing or proposed functionality and to solicit ideas for improvements to the status quo.
Once the product is in production, use in-app tools to support users, allowing them to report issues,
frustrations, improvements, and enhancements.
If you sell a software product, asking for feedback directly inside the app is a fantastic method for
collecting product feedback.
It helps you narrow in on specific issues your customers are experiencing. However, it can also feel like a paradox of choice, since you can ask ANY question. Here are a few example questions that may be helpful to ask:
“What is {insert product feature} helping you accomplish?”
“What issues, if any, are you having with {insert product feature}?”
“What features do you think we’re missing today for {insert product feature}?”
There are hundreds of in-app questions you can ask. Here’s a preview of the pros and cons for in-app
surveys.
Pros
●● Lots of flexibility - you can ask whichever question you see fit, whether you're evaluating a new design, gauging how customers feel about a new feature launch, etc.
●● Gives us access to the customer/user where they are in the app.
●● Gives us context on what the user/customer is looking at in the app right before their response.
●● Allows us to respond in-app so I can keep all my feedback in one place.
Cons
●● Difficult to comb through open-ended responses and extract insights.
●● Low response rates.
●● No ability to throttle NPS based on if a user has recently responded to feedback – need to be able to suppress certain users.
Pros
●● Customers feel that they're an active part of building your product roadmap. It provides a place to make customers feel their voice is heard.
●● Builds a sense of community and heightened loyalty when you can collaborate with the company on ideas.
●● Provides a channel through which you can make users feel appreciated for their contributions by letting them know that you're acting on their suggestions.
Cons
●● Likely biased towards your highest-intent customers. The people who aren't using your product are much more likely to withhold feedback or product suggestions.
●● Low volume unless customers are explicitly prompted to suggest an idea in the board.
1 https://dev.azure.com/mseng/Azure%20DevOps%20Roadmap/_workitems/recentlyupdated
2 https://feedback.uservoice.com/knowledgebase/articles/363410-vsts-azure-devops-integration
Pros
●● No burden on the customer to complete a survey. In their natural environment.
●● Get a true measure of what customers think about you, as this method is entirely organic.
Cons
●● Difficult to measure and quantify, which makes it nearly impossible to track performance over time.
●● Challenging to tie social media comments back to a CRM system at scale.
Customer feedback doesn't just come in through your site's contact form – it's everywhere. You only have to search the Twitter handle of any product with more than a few hundred users to see that customers love to offer their opinion – positive and negative. It's useful to monitor this and learn from it, but casually collecting feedback on an ad-hoc basis isn't enough.
Startups thrive on feedback as their ‘North star’ and are constantly evolving based on what their custom-
ers request, break, and complain about. Enterprises also can’t overlook the fact that customers are what
make any company tick and must struggle harder than startups to stay relevant and innovate.
So, if you’re just collecting feedback ‘as and when’ it comes in, you’re missing out on data that’s just as
important as page views or engagement. It’s like deciding not to bother setting up Google Analytics on
your homepage, or not properly configuring your CRM; in the end, you’re deciding to not benefit from
data that will have a transformative effect on your product strategy. With a dataset of feedback – whether
that’s from customer reviews, support tickets, or social media – you can dig into the words your custom-
ers are using to describe certain parts of your product and get insights into what they like, and what they
don’t like.
Here’s the kind of result you can get with this.
The outcome of AI analysis on Slack reviews. The categories on the left refer to different parts of Slack’s
product, and the bars represent how positively or negatively customers feel about each.
As the saying goes, “For every customer who bothers to complain, 20 other customers remain silent.”
Unless the experience is bad, customers usually don’t bother to share feedback about an experience that
didn’t meet their expectations. Instead, they decide never to do business with the service provider again.
That’s a high price to pay for lost feedback.
An excellent source of feedback is on other websites, such as online communities, blogs, local listings,
and so on. If your customers are not happy with the resolution to a negative experience, they are likely to
vent their ire on these forums. The lost customer is not the only casualty. Studies have shown that each
dissatisfied customer typically shares the unsatisfactory experience with 8 to 10 (sometimes even 20)
others. With the growing use of social media, it’s not uncommon for negative feedback to go viral and
hurt the credibility of a brand.
3 https://marketplace.visualstudio.com/items?itemName=ms-devlabs.vss-services-twittersentimentanalysis
You can read more about release gates in Release deployment control using gates4.
4 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/gates?view=vsts
Getting started
1. Open the desired team project in Azure DevOps and navigate to the Queries page under Azure Boards.
2. Click to create a new query and add filters to return work items of type Bug in the New state with Priority 1 and Severity Critical. Save the query under Shared Queries as Untriaged Critical Bugs.
3. In the same team project, navigate to Azure Pipelines. Create an empty release pipeline with one stage and call it Staging.
4. In Staging, add a PowerShell task and set it to inline mode to print "Hello World."
5. Name the pipeline rel.software and click Save. Create a release and see the release go through successfully.
How to do it
1. Edit the release pipeline, click on pre-deployment condition and from the right pane enable gates.
2. Click on Add to add a new gate, choose Query Work Item type Gate.
3. From the query drop-down, select the shared query Untriaged Critical Bugs created earlier and set the upper threshold to 0. Set the delay before evaluation to 1 minute. Expand the evaluation options section and ensure that the time between re-evaluation of gates is set to 10 minutes and the minimum duration for steady results after a successful gates evaluation is configured to 15 minutes.
4. Save and queue a release. You'll see that the gate gets evaluated right away.
5. Now before the next evaluation of the gate, create a new work item of type Bug with Priority 1 and
severity critical. Wait for the next evaluation of the gate in the release pipeline to complete.
6. Close the bug as fixed, you'll see that after periodic evaluations and a stable period of 15 minutes the
release is completed successfully.
How it works
1. When a new release is triggered, the release goes into a pre-approval state. At this point, the automated gate is evaluated at the specified interval of 10 minutes. The release only moves into the approval state if the gate passes for the configured duration of steady results, specified as 15 minutes in this case. As you can see in the logs below, the gate fails at the second validation check because one Priority 1, Severity Critical bug in the New state is identified by the configured work item query.
2. Detailed logs of the gate checks can be downloaded and inspected along with the release pipeline
logs.
There's more
In this tutorial we looked at the Work Item Query gate. The following other gates are supported out of the box:
●● Invoke Azure function: Trigger execution of an Azure function and ensure a successful completion. For
more details, see Azure function task.
●● Query Azure monitor alerts: Observe the configured Azure monitor alert rules for active alerts. For
more details, see Azure monitor task.
●● Invoke REST API: Make a call to a REST API and continue if it returns a successful response. For more
details, see HTTP REST API task.
●● Query Work items: Ensure the number of matching work items returned from a query is within a
threshold. For more details, see Work item query task.
●● Security and compliance assessment: Assess Azure Policy compliance on resources within the scope of
a given subscription and resource group, and optionally at a specific resource level. For more details,
see Security Compliance and Assessment task.
In addition to these, you can develop your own gates by using the Azure DevOps APIs; a sketch of the kind of REST call such a gate might make follows. For inspiration, check out the gates developed by the community here5.
5 https://www.visualstudiogeeks.com/DevOps/IntegratingServiceNowWithVstsReleaseManagementUsingDeploymentGate
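For example, the check performed by the Query Work Items gate can be approximated with a single call to the work item query (WIQL) endpoint. The sketch below is hedged: the organization, project, personal access token, and WIQL field values are assumptions that depend on your process template.

$organization = "fabrikam"
$project      = "Tailwind"
$pat          = "<personal-access-token>"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

# WIQL equivalent of the "Untriaged Critical Bugs" query used earlier in this walkthrough
$wiql = @{ query = "SELECT [System.Id] FROM WorkItems WHERE [System.WorkItemType] = 'Bug' AND [System.State] = 'New' AND [Microsoft.VSTS.Common.Priority] = 1 AND [Microsoft.VSTS.Common.Severity] = '1 - Critical'" } | ConvertTo-Json

$result = Invoke-RestMethod -Method Post -ContentType "application/json" -Body $wiql -Headers $headers -Uri "https://dev.azure.com/$organization/$project/_apis/wit/wiql?api-version=6.0"
if ($result.workItems.Count -gt 0) { throw "Gate check failed: $($result.workItems.Count) untriaged critical bug(s) found." }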
ance with the system's architecture, and Augmented Search machine learning capabilities on the dynamic
data makes it faster and easier to focus on the most relevant data. Here's how that works in practice:
One of the servers went down, and any attempt to restart it has failed. However, because its process is still running, the server seems to be up. In this case, end-users are complaining that an application is not responding. This symptom could be related to many problems in a complex environment with many servers.
Focusing on the server that is behind this problem can be difficult, as it seems to be up. But finding the
root cause of the problem requires a lengthy investigation even when you know which server is behind
this problem.
Augmented Search will display a layer which highlights critical events that occurred during the specified
period instead of going over thousands of search results. These highlights provide information regarding
the sources of the events, assisting in the triage process. At this point, DevOps engineers can understand the impact of the problem (e.g., which servers are affected by it) and then continue the investigation to
find the root cause of these problems.
Using Augmented Search, DevOps engineers can identify a problem and the root cause in a matter of
seconds instead of examining thousands of log events or running multiple checks on the various servers.
Adding this type of visibility to log analysis, and the ability to surface critical events out of tens of thou-
sands - and often millions - of events, is essential in a fast-paced environment, in which changes are
constantly introduced.
Integrating telemetry
A key factor to automating feedback is telemetry. By inserting telemetric data into your production
application and environment, the DevOps team can automate feedback mechanisms while monitoring
applications in real-time. DevOps teams use telemetry to see and solve problems as they occur, but this
data can be useful to both technical and business users.
When properly instrumented, telemetry can also be used to see and understand in real time how custom-
ers are engaging with the application. This could be critical information for product managers, marketing
teams, and customer support. Thus, it’s important that feedback mechanisms share continuous intelli-
gence with all stakeholders.
Benefits of telemetry
The primary benefit of telemetry is the ability of an end user to monitor the state of an object or environ-
ment while physically far removed from it. Once you’ve shipped a product, you can’t be physically
present, peering over the shoulders of thousands (or millions) of users as they engage with your product
to find out what works, what’s easy, and what’s cumbersome. Thanks to telemetry, those insights can be
delivered directly into a dashboard for you to analyze and act on.
Because telemetry provides insights into how well your product is working for your end users – as they
use it – it’s an incredibly valuable tool for ongoing performance monitoring and management. Plus, you
can use the data you’ve gathered from version 1.0 to drive improvements and prioritize updates for your
release of version 2.0.
Telemetry enables you to answer questions such as:
●● Are your customers using the features you expect? How are they engaging with your product?
●● How frequently are users engaging with your app, and for what duration?
●● What settings options do users select most? Do they prefer certain display types, input modalities, screen orientation, or other device configurations?
●● What happens when crashes occur? Are crashes happening more frequently when certain features or
functions are used? What’s the context surrounding a crash?
Obviously, the answers to these and the many other questions that can be answered with telemetry are
invaluable to the development process, enabling you to make continuous improvements and introduce
new features that, to your end users, may seem as though you’ve been reading their minds – which you
have been, thanks to telemetry.
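One way to start answering questions like these is to query the telemetry your app already sends. The following is a minimal sketch using the Application Insights REST query API; the application ID, API key, and the assumption that your app reports custom events are placeholders for your own setup.

$appId  = "<application-insights-application-id>"
$apiKey = "<application-insights-api-key>"

# How frequently are users engaging? Count custom events per day for the last week.
$kql = "customEvents | where timestamp > ago(7d) | summarize events = count() by bin(timestamp, 1d)"
Invoke-RestMethod -Headers @{ "x-api-key" = $apiKey } -Uri ("https://api.applicationinsights.io/v1/apps/$appId/query?query=" + [uri]::EscapeDataString($kql))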
Challenges of telemetry
Telemetry is clearly a fantastic technology, but it’s not without its challenges. The most prominent
challenge – and a commonly occurring issue – is not with telemetry itself, but with your end users and
their willingness to allow what some see as Big Brother-esque spying. In short, some users immediately
turn it off when they notice it, meaning any data generated from their use of your product won’t be
gathered or reported.
That means the experience of those users won’t be accounted for when it comes to planning your future
roadmap, fixing bugs, or addressing other issues in your app. Although this isn’t necessarily a problem by
itself, the issue is that users who tend to disallow these types of technologies often fall into the more tech-savvy portion of your user base. This can result in the dumbing-down of software. Other users, on the other hand, take no notice of telemetry happening behind the scenes or simply ignore it if they do.
It’s a problem without a clear solution — and it doesn’t negate the overall power of telemetry for driving
development — but one to keep in mind as you analyze your data. Therefore, when designing a strategy
for how you consider the feedback from application telemetry it's important to account for users who
don't participate in providing the telemetry.
your users’ experience and improve the stability of your application infrastructure. It helps identify the
root cause of issues quickly to proactively prevent outages and keep users satisfied.
With a DevOps approach, we are also seeing more customers broaden the scope of continuous monitor-
ing into the staging, testing and even development environments. This is possible because development
and test teams that are following a DevOps approach are striving to use production-like environments for
testing as much as possible. By running APM solutions earlier in the life cycle, development teams get
feedback in advance of how applications will eventually perform in production and can take corrective
action much earlier. In addition, operations teams that now are advising the development teams get
advance knowledge and experience to better prepare and tune the production environment, resulting in
far more stable releases into production.
Applications are more business critical than ever. They must be always up, always fast, and always
improving. Embracing a DevOps approach will allow you to reduce your cycle times to hours instead of
months, but you must keep ensuring a great user experience! Continuous monitoring of your entire
DevOps life cycle will ensure development and operations teams collaborate to optimize the user
experience every step of the way, leaving more time for your next big innovation.
When shortlisting a monitoring tool, you should seek the following advanced features:
Synthetic Monitoring: Developers, testers and operations staff all need to ensure that their internet and
intranet mobile applications and web applications are tested and operate successfully from different
points of presence around the world.
Alert Management: Developers, testers and operations staff all need to send notifications via email,
voice mail, text, mobile push notifications and Slack messages when specific situations or events occur in
development, testing or production environments, to get the right people’s attention and to manage
their response.
Deployment Automation: Developers, testers and operations staff use different tools to schedule and
deploy complex applications and configure them in development, testing and production environments.
We will discuss the best practices for these teams to collaborate effectively and efficiently and avoid
potential duplication and erroneous information.
●● Analytics: Developers need to be able to look for patterns in log messages to identify if there is a problem in the code. Operations teams need to do root-cause analysis across multiple log files to identify the source of problems in complex applications and systems.
Managing alerts
When would I get a notification?
Application Insights6 automatically analyzes the performance of your web application and can warn you
about potential problems. You might be reading this because you received one of our smart detection
notifications.
This feature requires no special setup, other than configuring your app for Application Insights (on ASP.
NET7, Java8, or Node.js, and in web page code9). It is active when your app generates enough telemetry.
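If you are setting this up from scratch, creating the Application Insights resource and retrieving its instrumentation key can be scripted. This is a hedged sketch with placeholder names; it assumes the Az.ApplicationInsights module, and the key is then used in whichever SDK configuration (ASP.NET, Java, Node.js, or JavaScript) applies to your app.

$ai = New-AzApplicationInsights -ResourceGroupName "monitoring-rg" -Name "my-web-app-insights" -Location "eastus"
$ai.InstrumentationKey   # plug this value into the SDK or JavaScript snippet configuration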
6 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview
7 https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net
8 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-java-get-started
9 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-javascript
10 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-proactive-performance-diagnostics#when-would-i-get-a-smart-
detection-notification
1. Triage. The notification shows you how many users or how many operations are affected. This can
help you assign a priority to the problem.
2. Scope. Is the problem affecting all traffic, or just some pages? Is it restricted to particular browsers or
locations? This information can be obtained from the notification.
3. Diagnose. Often, the diagnostic information in the notification will suggest the nature of the problem.
For example, if response time slows down when request rate is high, that suggests your server or
dependencies are overloaded. Otherwise, open the Performance blade in Application Insights. There,
you will find Profiler11 data. If exceptions are thrown, you can also try the snapshot debugger12.
11 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-profiler
12 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-snapshot-debugger
●● You can use the unsubscribe link in the smart detection email to stop receiving the email notifications.
Emails about smart detection performance anomalies are limited to one email per day per Application Insights resource. The email will be sent only if there is at least one new issue that was detected on that day. You won't get repeats of any message.
Triage14
●● First, does it matter? If a page is always slow to load, but only 1% of your site's users ever have to look
at it, maybe you have more important things to think about. On the other hand, if only 1% of users
open it, but it throws exceptions every time, that might be worth investigating. Use the impact
statement (affected users or % of traffic) as a general guide but be aware that it isn't the whole story.
Gather other evidence to confirm. Consider the parameters of the issue. If it's geography-dependent,
set up availability tests15 including that region: there might simply be network issues in that area.
13 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-resources-roles-access-control
14 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-proactive-performance-diagnostics#triage
15 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-monitor-web-app-availability
16 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-proactive-performance-diagnostics#diagnose-slow-page-loads
●● If Send Request Time is high, either the server is responding slowly, or the request is a post with a
lot of data. Look at the performance metrics17 to investigate response times.
●● Set up dependency tracking18 to see whether the slowness is due to external services or your
database.
●● If Receiving Response is predominant, your page and its dependent parts - JavaScript, CSS, images and so on (but not asynchronously loaded data) - are taking a long time to download. Set up an availability test19 and be sure to set the option to load dependent parts. When you get some results, open the detail of a result, and expand it to see the load times of different files.
●● High Client Processing time suggests scripts are running slowly. If the reason isn't obvious, consider adding some timing code and sending the times in trackMetric calls.
●● Slow loading because of big files: Load the scripts and other parts asynchronously. Use script
bundling. Break the main page into widgets that load their data separately. Don't send plain old
HTML for long tables: use a script to request the data as JSON or another compact format, then fill
the table in place. There are great frameworks to help with all this. (They also entail big scripts, of
course.)
●● Slow server dependencies: Consider the geographical locations of your components. For example,
if you're using Azure, make sure the web server and the database are in the same region. Do
queries retrieve more information than they need? Would caching or batching help?
●● Capacity issues: Look at the server metrics of response times and request counts. If response times
peak disproportionately with peaks in request counts, it's likely that your servers are stretched.
●● The response time compared to normal response time for this operation.
●● How many users are affected.
●● Average response time and 90th percentile response time for this operation on the day of the
detection and 7 days before.
●● Count of this operation requests on the day of the detection and 7 days before.
●● Correlation between degradation in this operation and degradations in related dependencies.
17 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-web-monitor-performance#metrics
18 https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net-dependencies
19 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-monitor-web-app-availability
20 https://docs.microsoft.com/en-us/azure/application-insights/app-insights-proactive-performance-diagnostics#improve-slow-pages
●● Profiler traces to help you view where operation time is spent (the link is available if Profiler
trace examples were collected for this operation during the detection period).
●● Performance reports in Metric Explorer, where you can slice and dice time range/filters for this
operation.
●● Search for this call to view specific call properties.
●● Failure reports - If count > 1, this means that there were failures in this operation that might have contributed to the performance degradation.
Does it mean everyone gets off the hook for making mistakes? No.
Well, maybe. It depends on what “gets off the hook” means. Let me explain.
Having a Just Culture means that you’re making effort to balance safety and accountability. It means
that by investigating mistakes in a way that focuses on the situational aspects of a failure’s mechanism
and the decision-making process of individuals proximate to the failure, an organization can come out
safer than it would normally be if it had simply punished the actors involved as a remediation.
Having a “blameless” retrospective process means that engineers whose actions have contributed to an
accident can give a detailed account of:
●● what actions they took at what time
●● what effects they observed
●● expectations they had
●● assumptions they had made
●● their understanding of timeline of events as they occurred
AND that they can give this detailed account without fear of punishment or retribution.
Why shouldn't they be punished or reprimanded? Because an engineer who thinks they're going to be reprimanded is disincentivized to give the details necessary to get an understanding of the mechanism, pathology, and operation of the failure. This lack of understanding of how the accident occurred all but guarantees that it will repeat, if not with the original engineer, then with another one in the future.
If we go with “blame” as the predominant approach, then we’re implicitly accepting that deterrence is
how organizations become safer. This is founded in the belief that individuals, not situations, cause errors.
It’s also aligned with the idea there must be some fear that not doing one’s job correctly could lead to
punishment. Because the fear of punishment will motivate people to act correctly in the future. Right?
21 http://www.erikhollnagel.com/
●● Accept that there is always a discretionary space where humans can decide to take actions or not, and that the judgment of those decisions lies in hindsight.
●● Accept that the Hindsight Bias22 will continue to cloud our assessment of past events and work hard
to eliminate it.
●● Accept that the Fundamental Attribution Error23 is also difficult to escape, so we focus on the
environment and circumstances people are working in when investigating accidents.
●● Strive to make sure that the blunt end of the organization understands how work is getting done (as
opposed to how they imagine it’s getting done, via Gantt charts and procedures) on the sharp end.
●● The sharp end is relied upon to inform the organization where the line is between appropriate and
inappropriate behavior. This isn’t something that the blunt end can come up with on its own.
Failure happens. To understand how failures occur, we first must understand our reactions to failure.
One option is to assume the single cause is incompetence and scream at engineers to make them “pay
attention!” or “be more careful!”
Another option is to take a hard look at how the accident happened, treat the engineers involved with
respect, and learn from the event.
For more information, see also:
●● Brian Harry's Blog - A good incident postmortem24
22 http://en.wikipedia.org/wiki/Hindsight
23 http://en.wikipedia.org/wiki/Fundamental_attribution_error
24 https://blogs.msdn.microsoft.com/bharry/2018/03/02/a-good-incident-postmortem/
Lab
Lab 18: Integration between Azure DevOps and Microsoft Teams
Lab overview
Microsoft Teams25 is a hub for teamwork in Office 365. It allows you to manage and use all your team's
chats, meetings, files, and apps together in one place. It provides software development teams with a hub
for teams, conversations, content and tools from across Office 365 and Azure DevOps.
In this lab, you will implement integration scenarios between Azure DevOps services and Microsoft
Teams.
Note: Azure DevOps Services integration with Microsoft Teams provides a comprehensive chat and
collaborative experience across the development cycle. Teams can easily stay informed of important
activities in your Azure DevOps team projects with notifications and alerts on work items, pull requests,
code commits, as well as build and release events.
Objectives
After you complete this lab, you will be able to:
●● Integrate Microsoft Teams with Azure DevOps
●● Integrate Azure DevOps Kanban boards and Dashboards in Teams
●● Integrate Azure Pipelines with Microsoft Teams
●● Install the Azure Pipelines app in Microsoft Teams
●● Subscribe for Azure Pipelines notifications
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions26
25 https://teams.microsoft.com/start
26 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
True or False: Azure DevOps has a feature request board.
True
False
Review Question 3
True or False: Application Insights analyses the traffic from your website against historic trends and sends
you smart detection notifications on degradation?
True
False
Answers
Review Question 1
What are some of the ways to measure end user satisfaction for your product?
■■ CSAT
■■ CES
STAR
■■ NPS
Review Question 2
True or False: Azure DevOps has a feature request board.
■■ True
False
Review Question 3
True or False: Application Insights analyses the traffic from your website against historic trends and sends
you smart detection notifications on degradation?
■■ True
False
Module 19 Implementing Security in DevOps
Projects
Module overview
As many as four out of five companies leveraging a DevOps approach to software engineering do so
without integrating the necessary information security controls, underscoring the urgency with which
companies should be evaluating “Rugged” DevOps (also known as “shift left”) to build security into their
development life cycle as early as possible.
Rugged DevOps represents an evolution of DevOps in that it takes a mode of development in which
speed and agility are primary and integrates security, not just with automated tools and processes, but
also through cultural change emphasizing ongoing, flexible collaboration between release engineers and
security teams. The goal is to bridge the traditional gap between the two functions, reconciling rapid
deployment of code with the imperative for security.
For many companies, a common pitfall on the path to implementing rugged DevOps is implementing the
approach all at once rather than incrementally, underestimating the complexity of the undertaking and
producing cultural disruption in the process. Putting these plans in place is not a one-and-done process;
instead, the approach should continuously evolve to support the various scenarios and needs that
DevOps teams encounter. The building blocks for Rugged DevOps involve understanding and implementation of the following concepts:
●● Code Analysis
●● Change Management
●● Compliance Monitoring
●● Threat Investigation
●● Vulnerability assessment & KPIs
Learning objectives
After completing this module, students will be able to:
●● Define an infrastructure and configuration strategy and appropriate toolset for a release pipeline and
application infrastructure
●● Implement compliance and security in your application infrastructure
The goal of a rugged DevOps pipeline is to allow development teams to work fast without breaking their
project by introducing unwanted security vulnerabilities.
Note: rugged DevOps is also sometimes referred to as DevSecOps. You might encounter both terms, but
each term refers to the same concept.
1 https://www.microsoft.com/en-us/security/operations/security-intelligence-report
Two important features of Rugged DevOps pipelines that are not found in standard DevOps pipelines are:
●● Package management and the approval process associated with it. The previous workflow diagram
details additional steps that account for how software packages are added to the pipeline, and the
approval processes that packages must go through before they are used. These steps should be
enacted early in the pipeline, so that issues can be identified sooner in the cycle.
●● Source Scanner, also an additional step for scanning the source code. This step allows for security scanning and for checking for security vulnerabilities in the source code itself. The scanning occurs after the app is built, but before release and pre-release testing. Source scanning can identify security vulnerabilities earlier in the cycle.
In the remainder of this lesson, we address these two important features of Rugged DevOps pipelines,
the problems they present, and some of the solutions for them.
Package management
Just as teams use version control as a single source of truth for source code, Rugged DevOps relies on a
package manager as the unique source of binary components. By using binary package management, a
development team can create a local cache of approved components and make this a trusted feed for the
Continuous Integration (CI) pipeline.
In Azure DevOps, Azure Artifacts is an integral part of the component workflow for organizing and
sharing access to your packages. Azure Artifacts allows you to:
●● Keep your artifacts organized. Share code easily by storing Apache Maven, npm, and NuGet packages
together. You can store packages using Universal Packages, eliminating the need to store binaries in
Git.
●● Protect your packages. Keep every public source package you use (including packages from npmjs.com and NuGet.org) safe in your feed where only you can delete it and where it's backed by the enterprise-grade Azure Service Level Agreement (SLA).
●● Integrate seamless package handling into your Continuous Integration (CI)/Continuous Delivery (CD) pipeline. Easily access all your artifacts in builds and releases. Azure Artifacts integrates natively with the Azure Pipelines CI/CD tool.
For more information about Azure Artifacts, visit the webpage What is Azure Artifacts?2
2 https://docs.microsoft.com/en-us/azure/devops/artifacts/overview?view=vsts
3 https://marketplace.visualstudio.com/items?itemName=ms.feed
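As an illustration of using a feed as the trusted source for the CI pipeline, a build output can be published to Azure Artifacts as a Universal Package. The sketch below uses the Azure CLI with the Azure DevOps extension from a PowerShell session; the organization, feed, and package names are placeholders.

# Prerequisites (assumed): az extension add --name azure-devops, plus az login or an AZURE_DEVOPS_EXT_PAT environment variable
az artifacts universal publish `
    --organization "https://dev.azure.com/fabrikam" `
    --feed "approved-components" `
    --name "payment-service" `
    --version "1.0.0" `
    --description "Approved build output" `
    --path ./drop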
Note: After you publish a particular version of a package to a feed, that version number is permanently
reserved. You cannot upload a newer revision package with that same version number or delete that
version and upload a new package with the same version number. The published version is immutable.
When consuming an OSS component, whether you are creating or consuming dependencies, you'll
typically want to follow these high-level steps:
1. Start with the latest, correct version to avoid any old vulnerabilities or license misuses.
2. Validate that the OSS components are the correct binaries for your version. In the release pipeline, validate binaries to ensure that they are correct and to keep a traceable bill of materials (a validation sketch follows this list).
3. Get notifications of component vulnerabilities immediately, correct them, and redeploy the component automatically to resolve security vulnerabilities or license misuses from reused software.
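A minimal sketch of the validation mentioned in step 2, assuming you record SHA256 checksums in your bill of materials (the file name and hash are placeholders):

$expectedSha256 = "<sha256-recorded-in-your-bill-of-materials>"
$actualSha256   = (Get-FileHash -Path ".\packages\somelib.1.2.3.nupkg" -Algorithm SHA256).Hash
if ($actualSha256 -ne $expectedSha256) {
    throw "Checksum mismatch: this is not the binary that was approved."
}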
WhiteSource
The WhiteSource5 extension is available on the Azure DevOps Marketplace. Using WhiteSource, you can
integrate extensions with your CI/CD pipeline to address Rugged DevOps security-related issues. For a
team consuming external packages, the WhiteSource extension specifically addresses open-source
security, quality, and license compliance concerns. Because most breaches today target known vulnerabil-
ities in common components, robust tools are essential to securing problematic open-source compo-
nents.
4 https://marketplace.visualstudio.com/
5 https://marketplace.visualstudio.com/items?itemName=whitesource.whitesource
For searching online repositories such as GitHub and Maven Central, WhiteSource also offers an innova-
tive browser extension. Even before choosing a new component, a developer can review its security
vulnerabilities, quality, and license issues, and whether it fits their company’s policies.
6 https://marketplace.visualstudio.com/items?itemName=fortifyvsts.hpe-security-fortify-vsts
Fortify on Demand
Fortify on Demand delivers application security as a service (SaaS). It automatically submits static and dynamic scan requests to the application's SaaS platform. Static assessments are uploaded to Fortify on Demand. For dynamic assessments, you can pre-configure a specific application URL.
Checkmarx functionality
Checkmarx functionality includes:
●● Best fix location. Checkmarx highlights the best place to fix your code to minimize the time required
to remediate the issue. A visual chart of the data flow graph indicates the ideal location in the code to
address multiple vulnerabilities within the data flow using a single line of code.
●● Quick and accurate scanning. Checkmarx helps reduce false positives, adapt the rule set to minimize
false positives, and understand the root cause for results.
●● Incremental scanning. Using Checkmarx, you can test just the parts of the code that have changed since the last code check-in. This helps reduce scanning time by more than 80 percent. It also enables you to incorporate the security gate within your continuous integration pipeline.
●● Seamless integration. Checkmarx works with all integrated development environments (IDEs), build
management servers, bug tracking tools, and source repositories.
●● Code portfolio protection. Checkmarx helps protect your entire code portfolio, both open source and
in-house source code. It analyzes open-source libraries, ensuring licenses are being adhered to, and
removing any open-source components that expose the application to known vulnerabilities. In
addition, Checkmarx Open Source helps provide complete code portfolio coverage under a single
unified solution with no extra installations or administration required.
●● Easy to initiate Open Source Analysis. With Checkmarx’s Open Source analysis, you don't need
additional installations or multiple management interfaces; you simply turn it on, and within minutes a
detailed report is generated with clear results and detailed mitigation instructions. Because analysis
7 https://marketplace.visualstudio.com/items?itemName=checkmarx.cxsast
results are designed with the developer in mind, no time is wasted trying to understand the required
action items to mitigate detected security or compliance risks.
Veracode functionality
Veracode's functionality includes the following features:
●● Integrate application security into the development tools you already use. From within Azure DevOps
and Microsoft Team Foundation Server (TFS) you can automatically scan code using the Veracode
Application Security Platform to find security vulnerabilities. With Veracode you can import any
security findings that violate your security policy as work items. Veracode also gives you the option to
stop a build if serious security issues are discovered.
●● No stopping for false alarms. Because Veracode gives you accurate results and prioritizes them based
on severity, you don't need to waste resources responding to hundreds of false positives. Veracode has assessed over 2 trillion lines of code in 15 languages and over 70 frameworks. In addition, this
process continues to improve with every assessment because of rapid update cycles and continuous
improvement processes. If something does get through, you can mitigate it using the easy Veracode
workflow.
●● Align your application security practices with your development practices. Do you have a large or
distributed development team? Do you have too many revision control branches? You can integrate
your Azure DevOps workflows with the Veracode Developer Sandbox, which supports multiple
development branches, feature teams, and other parallel development practices.
●● Find vulnerabilities and fix them. Veracode gives you remediation guidance with each finding and the
data path that a malicious user would use to reach the application's weak point. Veracode also
highlights the most common sources of vulnerabilities to help you prioritize remediation. In addition,
when vulnerability reports don't provide enough clarity, you can set up one-on-one developer
consultations with Veracode experts who have backgrounds in both security and software development.
Security issues that are found by Veracode and which could prevent you from releasing your code show
up automatically in your teams' list of work items and are automatically updated and closed after you
scan your fixed code.
8 https://marketplace.visualstudio.com/items?itemName=Veracode.veracode-vsts-build-extension
●● Proven onboarding process allows for scanning on day one. The cloud-based Veracode Application
Security Platform is designed to get you going quickly, in minutes even. Veracode's services and
support team can make sure that you are on track to build application security into your process.
9 https://www.whitesourcesoftware.com/
10 https://www.checkmarx.com/
11 https://www.veracode.com/
12 https://www.blackducksoftware.com/
At the same time, CD needs to be thorough. In Azure DevOps, CD is typically managed through release
definitions (which progress the build output across environments), or via additional build definitions.
Build definitions can be scheduled (perhaps daily) or triggered with each commit. In either case, the build
definition can perform a longer static analysis scan (as illustrated in the following image). You can scan
the full code project and review any errors or warnings offline without blocking the CI flow.
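As a hedged illustration of this scheduled approach, the sketch below assumes the Azure DevOps CLI extension is installed and that a separate, longer-running pipeline named Nightly-Static-Analysis (a hypothetical name) already exists in a hypothetical organization and project; an external scheduler could queue it without slowing the fast CI pipeline.
# Install the Azure DevOps CLI extension and set default organization/project
# (the organization and project names below are placeholders).
az extension add --name azure-devops
az devops configure --defaults organization=https://dev.azure.com/contoso project=ContosoApp
# Queue the longer static-analysis pipeline; the commit-triggered CI pipeline stays fast.
az pipelines run --name "Nightly-Static-Analysis"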
13 http://aka.ms/jea
the Azure platform as a service (PaaS) or a similar service, then your pipeline will automatically create
new instances and then destroy them. This limits the places where attackers can hide malicious code
inside your infrastructure. Azure DevOps encrypts the secrets in your pipeline; as a best practice, rotate
the passwords just as you would with other credentials (see the CLI sketch after this list).
●● Permissions management. You can manage permissions to secure the pipeline with role-based access
control (RBAC), just as you would for your source code. This keeps you in control of who can edit the
build and release definitions that you use for production.
●● Dynamic scanning. This is the process of testing the running application with known attack patterns.
You could implement penetration testing as part of your release. You also could keep up to date on
security projects such as the Open Web Application Security Project (OWASP14) Foundation, then
adopt these projects into your processes.
●● Production monitoring. This is a key DevOps practice. The specialized services for detecting anomalies
related to intrusion are known as Security Information and Event Management (SIEM) systems. Azure
Security Center15 focuses on the security incidents that relate to the Azure cloud.
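As a hedged sketch of the secrets guidance in the list above (the pipeline ID and variable names are placeholders, and the Azure DevOps CLI extension is assumed), a secret variable can be created and later rotated like this:
# Store a credential as a secret pipeline variable; the value is encrypted by
# Azure DevOps and masked in logs (pipeline ID and names are placeholders).
az pipelines variable create \
  --pipeline-id 42 \
  --name DeploymentPassword \
  --secret true \
  --value "$NEW_PASSWORD"
# Rotating the credential later means updating the same secret variable.
az pipelines variable update \
  --pipeline-id 42 \
  --name DeploymentPassword \
  --secret true \
  --value "$ROTATED_PASSWORD"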
✔️ Note: In all cases, use Azure Resource Manager Templates or other code-based configurations. You
should also implement IaC best practices, such as only making changes in templates, to make changes
traceable and repeatable. Also, use provisioning and configuration technologies such as Desired State
Configuration (DSC), Azure Automation, and other third-party tools and products that can integrate
seamlessly with Azure.
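As a brief, hedged illustration of this note (the template and resource group names are placeholders), a change captured in an Azure Resource Manager template could be rolled out from the command line so that it stays traceable and repeatable:
# Deploy infrastructure only through the tracked template, never by hand-editing
# resources; re-running the same command is repeatable and auditable.
az deployment group create \
  --resource-group "ContosoRG" \
  --template-file azuredeploy.json \
  --parameters azuredeploy.parameters.json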
14 https://www.owasp.org
15 https://azure.microsoft.com/en-us/services/security-center/
16 https://github.com/azsk/DevOpsKit-docs
security will be maintained despite changes to the state of your systems, by using a combination of
tools such as automation runbooks and schedules.
●● Alerting & monitoring. Security status visibility is important for both individual application teams and
central enterprise teams. Secure DevOps provides solutions that cater to the needs of both. Moreover,
the solution spans across all stages of DevOps, in effect bridging the security gap between the Dev
team and the Ops team through the single, integrated view it can generate.
●● Governing cloud risks. Underlying all activities in the Secure DevOps kit is a telemetry framework that
generates events capturing usage, adoption, and evaluation results. This enables you to make
measured improvements to security by targeting areas of high risk and maximum usage.
You can leverage and utilize the tools, scripts, templates, and best practice documentation that are
available as part of AzSK.
Azure Security Center is part of the Center for Internet Security (CIS) Benchmarks17 recommendations.
17 https://www.cisecurity.org/cis-benchmarks/
You can read more about Azure Security Center at Azure Security Center18.
The following examples show how you can use Azure Security Center for the detect, assess, and
diagnose stages of your incident response plan.
●● Detect. Review the first indication of an event investigation. For example, use the Azure Security
Center dashboard to review the initial verification of a high-priority security alert occurring.
●● Assess. Perform the initial assessment to obtain more information about a suspicious activity. For
example, you can obtain more information from Azure Security Center about a security alert.
18 https://azure.microsoft.com/en-us/services/security-center/
●● Diagnose. Conduct a technical investigation and identify containment, mitigation, and workaround
strategies. For example, you can follow the remediation steps described by Azure Security Center for a
particular security alert.
●● Use Azure Security Center recommendations to enhance security.
You can reduce the chances of a significant security event by configuring a security policy, and then
implementing the recommendations provided by Azure Security Center. A security policy defines the set
of controls that are recommended for resources within a specified subscription or resource group. In
Azure Security Center, you can define policies according to your company's security requirements.
Azure Security Center analyzes the security state of your Azure resources. When it identifies potential
security vulnerabilities, it creates recommendations based on the controls set in the security policy. The
recommendations guide you through the process of configuring the corresponding security controls. For
example, if you have workloads that don't require the Azure SQL Database Transparent Data Encryption
(TDE) policy, turn off the policy at the subscription level and enable it only on the resource groups where
SQL Database TDE is required.
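As a hedged sketch of the assess stage described above, the commands below assume a recent Azure CLI in which the security commands are available; they list active alerts and current assessments so findings can be triaged alongside the Azure Security Center dashboard.
# List the security alerts raised for the current subscription (detect/assess).
az security alert list --output table
# List the assessments (recommendations) Security Center has generated, for
# example to spot unencrypted SQL databases or missing endpoint protection.
az security assessment list --output table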
You can read more about Azure Security Center at Azure Security Center19. More implementation and
scenario details are also available in the Azure Security Center planning and operations guide20.
Azure Policy
Azure Policy is an Azure service that you can use to create, assign, and manage policies. Policies enforce
different rules and effects over your Azure resources, which ensures that your resources stay compliant
with your standards and SLAs.
Azure Policy uses policies and initiatives to provide policy enforcement capabilities. Azure Policy evaluates
your resources by scanning for resources that do not comply with the policies you create. For example,
you might have a policy that specifies a maximum size limit for VMs in your environment. After you
implement your maximum VM size policy, whenever a VM is created or updated, Azure Policy evaluates
the VM resource to ensure that it complies with the size limit that you set in your policy.
Azure Policy can help to maintain the state of your resources by evaluating your existing resources and
configurations and remediating non-compliant resources automatically. It has built-in policy and initiative
definitions for you to use. The definitions are arranged in categories, such as Storage, Networking,
Compute, Security Center, and Monitoring.
Azure Policy can also integrate with Azure DevOps through continuous integration (CI) and continuous
delivery (CD) pipeline policies that apply to the pre-deployment and post-deployment phases of your
applications.
19 https://azure.microsoft.com/en-us/services/security-center/
20 https://docs.microsoft.com/en-us/azure/security-center/security-center-planning-and-operations-guide
Policies
Applying a policy to your resources with Azure Policy involves the following high-level steps:
1. Policy definition. Create a policy definition.
2. Policy assignment. Assign the definition to a scope of resources.
3. Remediation. Review the policy evaluation results and address any non-compliances.
Policy definition
A policy definition specifies the resources to be evaluated and the actions to take on them. For example,
you could prevent VMs from deploying if they are exposed to a public IP address. You could also prevent
a specific hard disk from being used when deploying VMs to control costs. Policies are defined in the
JavaScript Object Notation (JSON) format.
The following example defines a policy that limits where you can deploy resources:
{
  "properties": {
    "mode": "all",
    "parameters": {
      "allowedLocations": {
        "type": "array",
        "metadata": {
          "description": "The list of locations that can be specified when deploying resources",
          "strongType": "location",
          "displayName": "Allowed locations"
        }
      }
    },
    "displayName": "Allowed locations",
    "description": "This policy enables you to restrict the locations your organization can specify when deploying resources.",
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": "[parameters('allowedLocations')]"
        }
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
21 https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-policy-check-gate?view=vsts
22 https://azure.microsoft.com/en-us/services/azure-policy/
Policy assignment
Policy definitions, whether custom or built in, need to be assigned. A policy assignment is a policy defini-
tion that has been assigned to a specific scope. Scopes can range from a management group to a
resource group. Child resources will inherit any policy assignments that have been applied to their
parents. This means that if a policy is applied to a resource group, it's also applied to all the resources
within that resource group. However, you can define subscopes for excluding resources from policy
assignments.
You can assign policies via:
●● Azure portal
●● Azure CLI
●● PowerShell
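For example, a minimal Azure CLI sketch (assuming the policy rule and parameters from the definition above have been split into rules.json and params.json, with placeholder names and locations) could create and assign the definition like this:
# Create the custom "Allowed locations" policy definition from local JSON files.
az policy definition create \
  --name "allowed-locations" \
  --display-name "Allowed locations" \
  --rules rules.json \
  --params params.json \
  --mode All
# Assign the definition to a resource group, restricting deployments to two regions.
az policy assignment create \
  --name "allowed-locations-rg" \
  --policy "allowed-locations" \
  --resource-group "ContosoRG" \
  --params '{ "allowedLocations": { "value": ["westeurope", "northeurope"] } }'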
Remediation
Resources found not to comply with a deployIfNotExists or modify policy condition can be put into a
compliant state through remediation. Remediation instructs Azure Policy to run the deployIfNotExists
effect or the tag operations of the policy on existing resources. To minimize configuration drift, you can
bring resources into compliance using automated bulk remediation instead of going through them one
at a time.
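As a hedged sketch (the assignment and resource group names are placeholders), a bulk remediation for an existing deployIfNotExists or modify assignment could be started from the Azure CLI:
# Create a remediation task that re-runs the deployIfNotExists/modify effect
# against the existing non-compliant resources in the resource group.
az policy remediation create \
  --name "remediate-required-tags" \
  --policy-assignment "require-tag-assignment" \
  --resource-group "ContosoRG"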
You can read more about Azure Policy on the Azure Policy23 webpage.
23 https://azure.microsoft.com/en-us/services/azure-policy/
Initiatives
Initiatives work alongside policies in Azure Policy. An initiative definition is a set of policy definitions to
help track your compliance state for meeting large-scale compliance goals. Even if you have a single
policy, we recommend using initiatives if you anticipate increasing your number of policies over time. The
application of an initiative definition to a specific scope is called an initiative assignment.
Initiative definitions
Initiative definitions simplify the process of managing and assigning policy definitions by grouping sets of
policies into a single item. For example, you can create an initiative named Enable Monitoring in Azure
Security Center to monitor security recommendations from Azure Security Center. Under this example
initiative, you would have the following policy definitions:
●● Monitor unencrypted SQL Database in Security Center. This policy definition monitors unencrypted
SQL databases and servers.
●● Monitor OS vulnerabilities in Security Center. This policy definition monitors servers that do not satisfy
a specified OS baseline configuration.
●● Monitor missing Endpoint Protection in Security Center. This policy definition monitors servers
without an endpoint protection agent installed.
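A hedged CLI sketch of creating and assigning such an initiative follows; the GUIDs are placeholders for the real built-in policy definition IDs referenced above.
# Group several built-in policy definitions into one initiative (policy set).
az policy set-definition create \
  --name "enable-monitoring-asc" \
  --display-name "Enable Monitoring in Azure Security Center" \
  --definitions '[
    { "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<sql-encryption-guid>" },
    { "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<os-baseline-guid>" },
    { "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<endpoint-protection-guid>" }
  ]'
# Assign the initiative to a scope, just as you would assign a single policy.
az policy assignment create \
  --name "enable-monitoring-asc-assignment" \
  --policy-set-definition "enable-monitoring-asc" \
  --resource-group "ContosoRG"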
Initiative assignments
Like a policy assignment, an initiative assignment is an initiative definition assigned to a specific scope.
Initiative assignments reduce the need to make several initiative definitions for each scope. Scopes can
range from a management group to a resource group. You can assign initiatives in the same way that you
assign policies.
You can read more about policy definition and structure at Azure Policy definition structure24.
Resource locks
Locks help you prevent accidental deletion or modification of your Azure resources. You can manage
locks from within the Azure portal, where the two lock levels are displayed as Delete and Read-only,
respectively. To review, add, or delete locks for a resource in the Azure portal, go to the Settings section
on the resource's settings blade.
You might need to lock a subscription, resource group, or resource to prevent users from accidentally
deleting or modifying critical resources. You can set a lock level to CanNotDelete or ReadOnly:
●● CanNotDelete means that authorized users can read and modify a resource, but they cannot delete
the resource.
●● ReadOnly means that authorized users can read a resource, but they cannot modify or delete it.
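For example, a minimal sketch using the Azure CLI (the resource names are placeholders):
# Prevent accidental deletion of a resource group and everything inside it.
az lock create \
  --name "do-not-delete" \
  --lock-type CanNotDelete \
  --resource-group "ContosoRG"
# Make a single storage account read-only (no modifications or deletions allowed).
az lock create \
  --name "read-only-storage" \
  --lock-type ReadOnly \
  --resource-group "ContosoRG" \
  --resource-name "contosostorage001" \
  --resource-type "Microsoft.Storage/storageAccounts"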
You can read more about Locks on the Lock resources to prevent unexpected changes25 webpage.
Azure Blueprints
Azure Blueprints enables cloud architects to define a repeatable set of Azure resources that implement
and adhere to an organization's standards, patterns, and requirements. Azure Blueprints helps develop-
24 https://docs.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure
25 https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-lock-resources
ment teams build and deploy new environments rapidly with a set of built-in components that speed up
development and delivery. Furthermore, this is done while staying within organizational compliance
requirements.
Azure Blueprints provides a declarative way to orchestrate deployment for various resource templates
and artifacts, including:
●● Role assignments
●● Policy assignments
●● Azure Resource Manager templates
●● Resource groups
To implement Azure Blueprints, complete the following high-level steps:
1. Create a blueprint.
2. Assign the blueprint.
3. Track the blueprint assignments.
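These steps can also be scripted. The hedged sketch below assumes the Azure CLI blueprint extension and uses placeholder names and IDs throughout; a real assignment usually needs additional settings such as a managed identity and artifact parameters.
# The blueprint commands live in a separate CLI extension.
az extension add --name blueprint
# 1. Create a blueprint definition (at the current subscription scope).
az blueprint create \
  --name "corp-baseline" \
  --description "Baseline resource groups, policies, and role assignments"
# 2. Publish a version of the blueprint so that it can be assigned.
az blueprint publish --blueprint-name "corp-baseline" --version "1.0"
# 3. Assign (deploy) the published version; assignments can then be tracked.
az blueprint assignment create \
  --name "corp-baseline-assignment" \
  --location "westeurope" \
  --blueprint-version "/subscriptions/<subscription-id>/providers/Microsoft.Blueprint/blueprints/corp-baseline/versions/1.0"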
With Azure Blueprints, the relationship between the blueprint definition (what should be deployed) and
the blueprint assignment (what is deployed) is preserved.
The blueprints in Azure Blueprints are different from Azure Resource Manager templates. When Azure Re-
source Manager templates deploy resources, they have no active relationship with the deployed resourc-
es. (They exist in a local environment or in source control.) By contrast, with Azure Blueprints, each
deployment is tied to an Azure Blueprints package, so the relationship with deployed resources is
maintained even after deployment. Maintaining these relationships improves deployment tracking
and auditing capabilities.
Usage scenario
Adhering to security and compliance requirements, whether government, industry, or organizational
requirements, can be difficult and time consuming. To help you to trace your deployments and audit
them for compliance, Azure Blueprints uses artifacts and tools that expedite your path to certification.
Azure Blueprints is also useful in Azure DevOps scenarios where blueprints are associated with specific
build artifacts and release pipelines, and blueprints can be tracked rigorously.
You can learn more about Azure Blueprints at Azure Blueprints26.
26 https://azure.microsoft.com/services/blueprints/
zation. Azure ATP is capable of detecting known malicious attacks and techniques and can help you
investigate security issues and network vulnerabilities.
27 https://portal.atp.azure.com
28 https://www.microsoft.com/en-ie/cloud-platform/enterprise-mobility-security-pricing
29 https://azure.microsoft.com/en-us/features/azure-advanced-threat-protection/
Lab
Lab 19: Implement security and compliance in
Azure DevOps Pipelines
Lab overview
In this lab, we will create a new Azure DevOps project, populate the project repository with sample
application code, and create a build pipeline. Next, we will install WhiteSource Bolt from the Azure DevOps
Marketplace to make it available as a build task, activate it, add it to the build pipeline, use it to scan the
project code for security vulnerabilities and licensing compliance issues, and finally view the resulting
report.
Objectives
After you complete this lab, you will be able to:
●● Create a Build pipeline
●● Install WhiteSource Bolt from the Azure DevOps marketplace and activate it
●● Add WhiteSource Bolt as a build task in a build pipeline
●● Run the build pipeline and view the WhiteSource security and compliance report
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions30
30 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
Which term broadly defines what security means in rugged DevOps?
Access control
Application server hardening
Perimeter protection
Securing the pipeline
Review Question 3
What component in Azure DevOps can you use to store, organize, and share access to packages, and
integrate those packages with your continuous integration and continuous delivery pipeline?
Test Plans
Azure Artifacts
Boards
Pipelines
Review Question 4
Which description from the list below best describes the term software composition analysis?
Assessment of production hosting infrastructure just before deployment
Analyze build software to identify load capacity
Analyzing open-source software (OSS) to identify potential security vulnerabilities and provide
validation that the software meets a defined criterion to use in your pipeline
Analyzing open-source software after it has been deployed to production to identify security vulnera-
bilities
Review Question 5
From where can extensions be sourced to be integrated into Azure DevOps CI/CD pipelines and help
provide security composition analysis?
Azure DevOps Marketplace
www.microsoft.com
Azure Security Center
TFVC git repos
Review Question 6
Which products, from the below list, are available as extensions in Azure DevOps Marketplace, and can
provide either OSS or source code scanning as part of an Azure DevOps pipeline? Choose all that apply.
Whitesource
Checkmarx
Micro Focus Fortify
Veracode
Review Question 7
Which Azure service from the below list is a monitoring service that can provide threat protection and
security recommendations across all your services both in Azure and on-premises?
Azure Policy
Azure Security Center
Azure Key vault
Role-based access control
Review Question 8
Which Azure service should you use from the below list to monitor all unencrypted SQL databases in your
organization?
Azure Policy
Azure Security Center
Azure Key Vault
Azure Machine Learning
Review Question 9
Which facility from the below list allows you to prevent accidental deletion of resources in Azure?
Key Vault
Azure virtual machines
Azure Blueprints
Locks
Answers
Review Question 1
Rugged DevOps combines which two elements? Choose two.
■■ DevOps
Cost management
Microservice Architecture
■■ Security
Hackathons
Explanation
DevOps and Security are the correct answers. All other answers are incorrect. Rugged DevOps brings
together the notions of DevOps and Security. DevOps is about working faster. Security is about emphasizing
thoroughness, which is typically done at the end of the cycle, resulting in potentially generating unplanned
work right at the end of the pipeline. Rugged DevOps is a set of practices designed to integrate DevOps
and security, and to meet the goals of both more effectively.
Review Question 2
Which term broadly defines what security means in rugged DevOps?
Access control
Application server hardening
Perimeter protection
■■ Securing the pipeline
Explanation
Securing the pipeline is the correct answer.
All other answers, while covering some elements of security, and while being important, do not cover what
is meant by security in Rugged DevOps.
With rugged DevOps, security is more about securing the pipeline, determining where you can add security
to the elements that plug into your build and release pipeline. For example, it's about how and where you
can add security to your automation practices, production environments, and other pipeline elements
while attempting to retain the speed of DevOps.
Rugged DevOps includes bigger questions such as:
Is my pipeline consuming third-party components, and if so, are they secure?
Are there known vulnerabilities within any of the third-party software we use?
How quickly can I detect vulnerabilities (time to detect)?
How quickly can I remediate identified vulnerabilities (time to remediate)?
Review Question 3
What component in Azure DevOps can you use to store, organize, and share access to packages, and
integrate those packages with your continuous integration and continuous delivery pipeline?
Test Plans
■■ Azure Artifacts
Boards
Pipelines
Explanation
Azure Artifacts is the correct answer. Azure Artifacts is an integral part
of the component workflow, which you can use to organize and share access to
your packages. It allows you to:
Keep your artifacts organized. Share code easily by storing Apache Maven, npm, and NuGet packages
together. You can store packages using Universal Packages, eliminating the need to store binaries in Git.
Protect your packages. Keep every public source package you use, including packages from npmjs and
nuget.org, safe in your feed where only you can delete it, and where it’s backed by the enterprise-grade
Azure SLA.
Integrate seamless package handling into your CI/CD pipeline. Easily access all your artifacts in builds and
releases. Artifacts integrate natively with the Azure Pipelines CI/CD tool.
Review Question 4
Which description from the list below best describes the term software composition analysis?
Assessment of production hosting infrastructure just before deployment
Analyze build software to identify load capacity
■■ Analyzing open-source software (OSS) to identify potential security vulnerabilities and provide
validation that the software meets a defined criterion to use in your pipeline
Analyzing open-source software after it has been deployed to production to identify security vulnera-
bilities
Explanation
Analyzing open-source software (OSS) to identify potential security vulnerabilities and provide validation
that the software meets a defined criterion to use in your pipeline is the correct answer.
When consuming an OSS component, whether you're creating or consuming dependencies, you'll typically
want to follow a set of high-level steps.
Review Question 5
From where can extensions be sourced to be integrated into Azure DevOps CI/CD pipelines and help
provide security composition analysis?
■■ Azure DevOps Marketplace
www.microsoft.com
Azure Security Center
TFVC git repos
Explanation
Azure DevOps Marketplace is the correct answer. All other answers are incorrect.
Azure DevOps Marketplace is an important site for addressing Rugged DevOps issues. From here you can
integrate specialist security products into your Azure DevOps pipeline. Having a full suite of extensions that
allow seamless integration into Azure DevOps pipelines is invaluable.
Review Question 6
Which products, from the below list, are available as extensions in Azure DevOps Marketplace, and can
provide either OSS or source code scanning as part of an Azure DevOps pipeline? Choose all that apply.
■■ Whitesource
■■ Checkmarx
■■ Micro Focus Fortify
■■ Veracode
Explanation
All answers are correct.
All the listed products are available as extensions in Azure DevOps Marketplace and can provide either OSS
or static source code scanning as part of the Azure DevOps pipeline.
Review Question 7
Which Azure service from the below list is a monitoring service that can provide threat protection and
security recommendations across all your services both in Azure and on-premises?
Azure Policy
■■ Azure Security Center
Azure Key vault
Role-based access control
Explanation
Azure Security Center is the correct answer. All other answers are incorrect.
Azure Security Center is a monitoring service that provides threat protection across all your services, both
in Azure and on-premises. None of the other services provide a monitoring service that can deliver threat
protection and security recommendations across all your services both in Azure and on-premises.
Review Question 8
Which Azure service should you use from the below list to monitor all unencrypted SQL databases in your
organization?
■■ Azure Policy
Azure Security Center
Azure Key Vault
Azure Machine Learning
Explanation
Azure Policy is the correct answer. All other answers are incorrect.
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce
different rules and effects over your resources, which ensures they stay compliant with your corporate stand-
ards and service-level agreements (SLAs). A policy definition expresses what to evaluate and what action to
take. For example, you could prevent VMs from deploying if they are exposed to a public IP address. You
also could prevent a particular hard disk from being used when deploying VMs to control costs.
Initiative definitions simplify the process of managing and assigning policy definitions by grouping a set of
policies as one single item. For example, you could create an initiative named Enable Monitoring in Azure
Security Center, with a goal to monitor all the available security recommendations in your Azure Security
Center. Under this initiative, you would group policy definitions such as Monitor unencrypted SQL Database
in Security Center.
Review Question 9
Which facility from the below list allows you to prevent accidental deletion of resources in Azure?
Key Vault
Azure virtual machines
Azure Blueprints
■■ Locks
Explanation
Locks is the correct answer. All other answers are incorrect. Locks help you prevent accidental deletion or
modification of your Azure resources. You can manage these locks from within the Azure portal. To view,
add, or delete locks, go to the SETTINGS section of any resource's settings blade. You may need to lock a
subscription, resource group, or resource to prevent other users in your organization from accidentally
deleting or modifying critical resources. You can set the lock level to CanNotDelete or ReadOnly.
Module 20 Validating Code Bases for Compliance
Module overview
In the last module we saw how “Rugged” DevOps (or DevSecOps) has become a critical part of software
development.
Open-source software is now commonly used, and alongside all the benefits it has brought, there are
potential downsides that need to be managed.
Organizations will have security and compliance policies that need to be adhered to. This is best done by
integrating license and vulnerability scans as part of the build and deployment processes.
Learning objectives
After completing this module, students will be able to:
●● Describe the potential challenges with integrating open-source software
●● Inspect open-source software packages for security and license compliance
●● Manage organizational security and compliance policies
●● Integrate license and vulnerability scans into build and deployment pipelines
●● Configure build pipelines to access package security and license ratings
Open-source software
How software is built
Let's look at using open-source software in building software.
as many other components. The .NET Foundation aims to advocate for the needs of .NET developers,
evangelize the benefits of the .NET platform, and promote the use of .NET open source for developers.
For more information, see the .NET Foundation website1.
Open-source licenses
Open-source software and the related source code are accompanied by a license agreement. The license
describes the way the source code and the components built from it can be used, and the obligations that
apply to any software created with them.
1 http://www.dotnetfoundation.org
Types of licenses
There are multiple licenses used in open source, and they differ in nature. The license spectrum is a
chart that shows licenses from the developer's perspective and the implications their use has for the
downstream requirements imposed on the overall solution and source code.
On the left side there are the “attribution” licenses. They are permissive in nature and allow practically
every type of use by the software that consumes it. An example is building commercially available
software including the components or source code under this license. The only restriction is that the
original attribution to the authors remains included in the source code or as part of the downstream use
of the new software.
2 http://opensource.org/osd
The right side of the spectrum shows the “copyleft” licenses. These licenses are considered viral in nature,
as the use of the source code and its components, and distribution of the complete software, implies that
all source code using it should follow the same license form. The viral nature is that the use of the
software covered under this license type forces you to forward the same license for all work with or on
the original software.
The middle of the spectrum shows the “downstream” or "weak copyleft" licenses. These also require that,
when the covered code is distributed, it is distributed under the same license terms. Unlike the copyleft
licenses, this requirement does not extend to improvements or additions to the covered code.
License rating
Licenses can be rated by the impact that they have. When a package has a certain type of license, the use
of the package implies keeping to the requirements of the package. The impact the license has on the
downstream use of the code, components, and packages can be rated as High, Medium, or Low,
depending on the copyleft, downstream, or attribution nature of the license type.
For compliance purposes, a high license rating can be considered a risk to compliance, intellectual
property, and exclusive rights.
Package security
The use of components creates a software supply chain. The resultant product is a composition of all its
parts and components. This applies to the security level of the solution as well. Therefore, like license
types it is important to know how secure the components being used are. If one of the components used
is not secure, then the entire solution isn't either.
3 http://owasp.org
OWASP regularly publishes a set of Secure Coding Practices. Its guidelines currently cover advice in the
following areas:
●● Input Validation
●● Output Encoding
●● Authentication and Password Management
●● Session Management
●● Access Control
●● Cryptographic Practices
●● Error Handling and Logging
●● Data Protection
●● Communication Security
●● System Configuration
●● Database Security
●● File Management
●● Memory Management
●● General Coding Practices
To learn about common vulnerabilities, and to see how they appear in applications, OWASP also publish-
es an intentionally vulnerable web application called The Juice Shop Tool Project4. It includes vulnerabil-
ities from all the OWASP Top 105.
In 2002, Microsoft underwent a company-wide re-education and review phase to focus on producing
secure application code. The book Writing Secure Code by David LeBlanc and Michael Howard6 was
written by two of the people involved and provides detailed advice on how to write secure code.
For more information, you can see:
●● The OWASP foundation7
●● OWASP Secure Coding Practices Quick Reference Guide8
●● OWASP Code Review guide9
●● OWASP Top Ten10
4 https://www.owasp.org/index.php/OWASP_Juice_Shop_Project
5 https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
6 https://www.booktopia.com.au/ebooks/writing-secure-code-david-leblanc/prod2370006179962.html
7 http://owasp.org
8 https://www.owasp.org/images/0/08/OWASP_SCP_Quick_Reference_Guide_v2.pdf
9 https://www.owasp.org/images/2/2e/OWASP_Code_Review_Guide-V1_1.pdf
10 https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
11 https://blogs.msdn.microsoft.com/secdevblog/2016/08/17/introducing-binskim/
12 https://github.com/deliveron/owasp-zap-vsts-extension
Even with continuous security validation running against every change to help ensure new vulnerabilities
are not introduced, hackers are continuously changing their approaches, and new vulnerabilities are
being discovered. Good monitoring tools help you detect, prevent, and remediate issues discovered
while your application is running in production. Azure provides several tools that offer detection,
prevention, and alerting based on rules, such as the OWASP Top 1013, and that now even use machine
learning to detect anomalies and unusual behavior to help identify attackers.
Minimize security vulnerabilities by taking a holistic and layered approach to security including secure
infrastructure, application architecture, continuous validation, and monitoring. DevSecOps practices
enable your entire team to incorporate these security capabilities throughout the entire lifecycle of your
application. Establishing continuous security validation in your CI/CD pipeline allows your application
to stay secure while you improve deployment frequency to meet the needs of your business and stay
ahead of the competition.
●● Artifactory: artifact repository
●● SonarQube: static code analysis tool
●● WhiteSource (Bolt): build scanning
Configure pipeline
Scanning for license types and security vulnerabilities in the pipeline is configured by using the
appropriate build tasks in your DevOps tooling. For Azure DevOps, these are build pipeline tasks.
SonarCloud
Technical debt can be classified as the measure between the codebase's current state and an optimal
state. Technical debt saps productivity by making code hard to understand, easy to break, and difficult to
validate, in turn creating unplanned work, ultimately blocking progress. Technical debt is inevitable! It
starts small and grows over time through rushed changes, lack of context, and lack of discipline.
Organizations often find that more than 50% of their capacity is sapped by technical debt. The hardest
part of fixing technical debt is knowing where to start. SonarQube is an open-source platform that is the
de facto solution for understanding and managing technical debt. In this section, we'll learn how to
leverage SonarQube in a build pipeline to identify technical debt.
13 https://owasp.org/www-project-top-ten/
Getting ready
SonarQube is an open platform to manage code quality. Originally famous in the Java community,
SonarQube now supports over 20 programming languages. The joint investments made by Microsoft and
SonarSource make SonarQube easier to integrate in Pipelines and better at analyzing .NET-based
applications. You can read more about the capabilities offered by SonarQube at https://www.sonarqube.org/.
SonarSource, the company behind SonarQube, offers a hosted SonarQube environment called SonarCloud.
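Inside an Azure DevOps pipeline you would normally use the SonarCloud marketplace tasks, but the same analysis can also be triggered from a script step with the scanner CLI. The following is a hedged sketch; the project key, organization, and token variable are placeholders.
# Run a SonarCloud analysis with the scanner CLI (placeholder key, org, and token).
sonar-scanner \
  -Dsonar.host.url=https://sonarcloud.io \
  -Dsonar.organization=contoso-org \
  -Dsonar.projectKey=contoso-web \
  -Dsonar.sources=. \
  -Dsonar.login="$SONAR_TOKEN"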
CodeQL in GitHub
CodeQL is used by developers to automate security checks. CodeQL treats code like data that can be
queried. GitHub researchers and community researchers have contributed standard CodeQL queries, and
you can write your own.
A CodeQL analysis consists of three phases:
●● Creating a CodeQL database (based upon the code)
●● Running CodeQL queries against the database
●● Interpreting the results
CodeQL is available as a command-line tool and as an extension for Visual Studio Code.
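A minimal sketch of the three phases using the CodeQL CLI, assuming it is installed and the standard JavaScript query pack is available (database and output names are placeholders):
# Phase 1: create a CodeQL database from the source tree.
codeql database create my-codeql-db --language=javascript --source-root=.
# Phases 2 and 3: run the standard queries and write SARIF results that can be
# interpreted locally or uploaded to GitHub code scanning.
codeql database analyze my-codeql-db codeql/javascript-queries \
  --format=sarif-latest --output=codeql-results.sarif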
Security updates
A key advantage of Dependabot security updates is that they can automatically create pull requests.
A developer can then review the suggested update and triage what is required to incorporate it.
For more information on automatic security updates, see About GitHub Dependabot security up-
dates19
14 https://help.semmle.com/codeql/codeql-overview.html
15 https://help.semmle.com/codeql/codeql-tools.html
16 https://docs.github.com/en/free-pro-team@latest/github/managing-security-vulnerabilities/about-alerts-for-vulnerable-dependencies
17 https://docs.github.com/en/free-pro-team@latest/github/visualizing-repository-data-with-graphs/about-the-dependency-
graph#supported-package-ecosystems
18 https://docs.github.com/en/free-pro-team@latest/github/managing-subscriptions-and-notifications-on-github/configuring-
notifications#github-dependabot-alerts-notification-options
19 https://docs.github.com/en/free-pro-team@latest/github/managing-security-vulnerabilities/about-github-dependabot-security-updates
Lab
Lab 20: Managing technical debt with
SonarQube and Azure DevOps
Lab overview
In the context of Azure DevOps, the term technical debt represents suboptimal means of reaching tactical
goals, which negatively affect the ability to reach strategic objectives in the area of software development
and deployment. Technical debt affects productivity by making code hard to understand, prone to
failures, time-consuming to change, and difficult to validate. Without proper oversight and management,
technical debt can accumulate over time and significantly impact the overall quality of the software and
the productivity of development teams in the longer term.
SonarQube20 is an open source platform for continuous inspection of code quality that facilitates
automatic reviews with static analysis of code to improve its quality by detecting bugs, code smells, and
security vulnerabilities.
In this lab, you will learn how to set up SonarQube on Azure and integrate it with Azure DevOps.
Objectives
After you complete this lab, you will be able to:
●● Provision SonarQube server as an Azure Container Instance21 from the SonarQube Docker image
●● Set up a SonarQube project
●● Provision an Azure DevOps Project and configure CI pipeline to integrate with SonarQube
●● Analyze SonarQube reports
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions22
20 https://www.sonarqube.org/
21 https://docs.microsoft.com/en-in/azure/container-instances/
22 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Review Question 2
How can an open source library cause licensing issues if it is free to download?
Review Question 3
What is open source software?
Answers
What issues are often associated with the use of open source libraries?
How can an open source library cause licensing issues if it is free to download?
Each library has usage restrictions as part of the licensing. These restrictions might not be compatible with
your intended application use.
A type of software where users of code are permitted to study, change, and distribute the software. The open
source license type can limit the actions (such as sale provisions) that can be taken.