Developer Portals
Prepare to Perform with Red Hat Developer Hub
Preview edition
This fragment invites you into our thinking as we write a developer’s guide to developer
portals. In the first chapter, you’ll learn what an internal developer portal (IDP) is and the
problems addressed by this category of software. To practically apply those ideas, you’ll
examine Red Hat Developer Hub (RHDH), built around the open source Backstage core to
implement an enterprise distribution of a platform for creating developer portals.
Subsequent chapters present some of the tools found in an RHDH instance and their
representation in the RHDH user interface, then show you how to work with those tools to
build an application. By the end of this preview, you’ll know the basics of Backstage Software
Templates and the architecture of the example app.
Note: Because this content is neither complete nor even completely edited, you
should not expect to get that example app running on your own developer portal given
the information in this preview edition alone.
We hope this advance screening will leave you informed about why IDPs are having a moment
and intrigued enough to read the complete first edition when we release it. We’ll do our best
to deliver insights about developer portals in modern development in terms of Backstage and
RHDH, which together represent the pioneering implementation of developer portal
concepts.
The Red Hat Developer program provides many member benefits, including a no-cost
subscription for individuals and access to products like Red Hat Enterprise Linux and Red Hat
OpenShift. Learn more: https://developers.redhat.com/about
Chapter 1: A platform for portals
As a cultural initiative, DevOps has probably made software more reliable and its creation
more rapid. Its best understood practices have benefited their progenitors, publicizers, and
most successful adopters. Developer engagement with deployment concerns engenders a
systems thinking that improves application architecture, but DevOps declares for developers
an unbounded array of new concepts, terms, and concerns.
This increase in cognitive load can make it feel like the early returns from DevOps practices
such as automation and “infrastructure as code” are diminishing and progress slowing. More
than 80% of developers say they have some kind of DevOps responsibility. The more layers
between code editor and cloud deployment a developer has to master, the harder it is to
onboard new teammates and the greater the chances of their burning out in the face of
endless switching into less and less familiar contexts.
But DevOps implies a host of new concerns, systems, services, and tools.
Microservices add complexity along another axis: any application is now many pieces. What
is the application in the first place? Who is responsible for any given microservice? How do
you use its API facilities? What APIs and other components does it depend on? Complexity
is multiplied.
So much YAML
There is YAML for everything: build configuration, CI/CD pipelines, container images,
repositories, and Kubernetes objects such as Deployment, StatefulSet, Ingress, and Route…
You want this stuff—or you should. And your company definitely wants it. It has a lot of
benefits. But complexity in doing simple things will threaten to impose diminishing returns. In
computing, the answer to complexity is always abstraction.
You’re supposed to be an expert in Java, not an expert on the details of your new team’s AWS
and OpenShift environments, configurations, and prerequisites.
IDPs address several key challenges in the software development process, including:
• Visibility: Developers can’t always see what other teams are working on. IDPs make
large deployment environments more transparent.
• Software catalog: A central repository for the services and components used in a
team or organization, enabling discovery and reuse. Catalog entities index the
disparate resources that go into an application, from source code to build services to
orchestrated deployments.
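To make the catalog idea concrete, here is a minimal sketch of the kind of entity descriptor an IDP indexes. The names here are illustrative, not from a real project; the full format is discussed later in the book:
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: example-service
  description: An illustrative entry in the Software Catalog
spec:
  type: service
  lifecycle: production
  owner: team-a
A file like this, checked into a service's repository, is what lets the catalog discover, describe, and link that service to its owners and dependencies.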
Backstage history
Backstage started at Spotify in 2016. Spotify developers maintain a huge number of
microservices, components, and tools.
In 2020, Spotify released Backstage under the open source Apache license, allowing other
organizations to benefit from the platform and contribute to its improvement. Developers and
companies adopted the project in response, many of them contributing new features, abilities,
and fixes to the upstream code base. A Backstage ecosystem has grown around the core
software as well, with plug-ins and integrations developed to extend its functionality.
Backstage promotes collaboration by providing visibility into what different teams are working
on. Transparency helps prevent duplication of efforts, encourages reuse of services and
components, and fosters a sense of community among development teams.
Backstage can automate tasks such as creating new services or components, provisioning
infrastructure, and generating documentation. Automation reduces manual overhead and lets
developers focus on writing code.
Teams can customize Backstage with features and plug-ins to fit their specific requirements.
Since Backstage has a growing community of contributors and adopters, new features,
integrations, and improvements appear frequently.
Like Backstage, Red Hat Developer Hub is a platform for building developer portals. There is
always platform engineering work, with or without that formal title. Identifying patterns and
standards and refining them in line with organization goals is a key part of getting value out of
your site’s developer portal. RHDH makes it easier for you to get started with the examples in
the book by avoiding a lot of secondary decisions about integrations, configuration, and
features that you’d need to make if you started from scratch with the upstream Backstage
code.
Summary
Every team and every project evolves a tailored development environment. This collection of
tools, services and configuration is often maintained by convention and transmitted by
osmosis.
Internal developer portals help teams curate, manage and replicate these environments.
Backstage is an open source CNCF project for building developer portals, and for
encapsulating tools, services, documentation and best practices in “golden paths” to ease
onboarding and daily development. Red Hat Developer Hub is Red Hat’s enterprise IDP
platform, curating Backstage core and the ecosystem around it.
Software Templates are a Backstage feature for flexibly defining a starting point for some
arbitrary kit. A Template might represent a place to begin work on a new application built with
some standard language or framework, for example, potentially including executables,
runtimes or compilers of standard versions, boilerplate code, build configurations, and the like.
You’ll investigate Templates by adding to your running Red Hat Developer Hub portal a pair of
templates that represent a starting point for working on a web application. Then you’ll
instantiate from those Templates with the portal’s Create function to generate all the
scaffolding for your new application: source code boilerplate in a Git repo, pipelines to build it,
manifests to deploy it on Kubernetes or OpenShift.
You’ll construct the application atop a set of Templates that define and deploy foundation
components, then use Red Hat Developer Hub’s Software Catalog, API index, and other
facilities to implement the application and extend it with new features.
Application specifics
• A backend application exposing a REST API to serve the points of interest from the
database (written in TypeScript with NestJS).
• A single-page application (SPA) to provide the user interface where the map is
displayed (written in TypeScript with Angular).
Architecture
While the backend, proxy, and frontend parts of the map application could be individually built
and separately deployed, it makes sense to simplify the architecture a bit. The SPA frontend is
bundled with the API client component and served by the Quarkus HTTP server. This gives the
POI Map application an architecture of three primary pieces: the frontend/API client
machinery, the NestJS POI API backend, and the database.
Software templates
Template structure
A Software Template is stored in its own source control repository, then registered in a given
RHDH instance’s software catalog.
The two Templates defining the foundations of the POI Map application use the same folder
structure:
my-template (1)
├── manifests (2)
├── skeleton (3)
└── template.yaml (4)
where:
1. my-template: The root directory of the Template's source repository.
2. manifests: The manifests subdirectory holds the YAML files, Helm charts, and
other declarations related to the deployment of the application.
3. skeleton: The skeleton subdirectory holds the basic source code structure of the
application.
4. template.yaml: The Template definition itself: its metadata, input parameters, and
the scaffolder steps to run.
First, the metadata block describes the Template and defines its owner and the Backstage
entity type created by instantiating it—in this case, an entity of type service.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: a-simple-template
  title: A simple template
  description: This simple template is used for learning purposes.
spec:
  owner: rhdeveloper-book-authors
  type: service
Next, the parameters array declares the elements of the configuration form presented when
someone selects a Template and creates a new instance of the entities defined in it.
Given the example parameters stanza shown here, each new instance of this Template will
expect you to enter or choose a number of configuration items, including the source
repository where Template resources should be created, and the RHDH user who owns the
entities created from the Template. While this book is a guide to using Backstage and Red Hat
Developer Hub rather than a guide to platform engineering topics like Template creation, it is
helpful to see how a Template author defines the Template instance configuration form.
Notice in the Name property that parameters can have a default value. The Owner property
defines a selection pop-up menu and the items on it.
parameters:
  - title: Enter some (required *) component parameters
    required:
      - name
    properties:
      name:
        title: Name
        type: string
        description: unique name for this component
        default: my-component-123
      owner:
        title: Owner
        type: string
        description: owner of this component
        ui:field: EntityPicker
        ui:options:
          catalogFilter:
            kind: [User]
  - title: Choose a repository location
    required:
      - repoUrl
    properties:
      repoUrl:
        title: Repository Location
        type: string
        ui:field: RepoUrlPicker
        ui:options:
          allowedHosts:
The preceding YAML snippet renders as a form like this in RHDH’s web UI. The component’s
name and owner can be specified in the first form section (Figure 5-4).
steps:
  - id: templateSource
    name: Generating the source code component
    action: fetch:template
    input:
      url: ./skeleton
      targetPath: ./source
      values:
        name: ${{ parameters.name }}
  - id: publishSource
    name: Publishing to the source code repository
    action: publish:github
    input:
      sourcePath: ./source
      description: Source code repository for component ${{ parameters.name }}
      repoUrl: ${{ parameters.repoUrl }}
      defaultBranch: main
      repoVisibility: public
  - id: registerComponent
    name: Register component into the catalog
    action: catalog:register
    input:
      repoContentsUrl: ${{ steps.publishSource.output.repoContentsUrl }}
      catalogInfoPath: '/catalog-info.yaml'
Template registration
You need to tell the portal about a new Template in order to work with it. In an organization
with an established RHDH or Backstage instance, daily development probably won’t involve
registering new Templates as often as it will involve creating new entities from provided
Templates and monitoring and working with entities already created and indexed in the
Software Catalog. But you’ll have to register the two POI application Templates on your new
and mostly empty RHDH instance, so here is a look at how Template registration works.
There are two ways to add Templates and make them available to the portal’s Create
functions. First, RHDH inherits app-config.yaml, Backstage’s main portal configuration
file. This file declares configuration for the life of a running portal instance, including
references to Template source URLs. The portal must be restarted to change this static
configuration.
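As a sketch of what such a static registration might look like, a catalog.locations entry in app-config.yaml points at a Template's source URL (the repository URL below is a placeholder, not from the book's example project):
catalog:
  locations:
    - type: url
      target: https://gitlab.com/example-org/my-template/-/blob/main/template.yaml
      rules:
        - allow: [Template]
The rules stanza restricts which entity kinds may be loaded from that location, a common safeguard for statically configured sources.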
Templates can instead be added dynamically through the Register existing component
item found in the Create view. Again, you inform the portal about new Templates by
reference to their source URL.
You can use the dynamic approach just described to add the two templates from their
respective repository location:
• Go to + Create → Register existing component and copy the full HTTP URL to the
template.yaml file in the nestjs-with-postgres folder of the Git repository.
Paste the URL into the form’s (1) Select URL field, as shown in Figure 5-6.
• Clicking the ANALYZE button shows that two entities will be added into the software
catalog. One entity is the location, the HTTP URL from where the template was
loaded, and the other entity is the template itself.
Backend Template
Begin by creating an instance from the nestjs-with-postgres template. This template
scaffolds the source repository for the backend service, builds the code in it, and deploys the
result along with the database server on which the backend relies.
Go to + Create and select the Template NestJS Service with backing PostgreSQL
database by clicking the CHOOSE button in the lower-right corner of the template tile.
In the second section, you specify the who, what, and where of the application resources to be
scaffolded from the Template: the cluster ID and namespace where the running application
should be deployed, the application ID that names it, and the user who owns it. You'll see this
metadata reflected in descriptions of, data about, and links between entities in the Software
Catalog after you submit the Create forms and scaffolding is complete.
Clicking the NEXT STEP button shows you a summary of all the entered form fields for a
final review (Figure 5-14).
Click CREATE to kick off the process of scaffolding application resources from the Template.
1. fetch:template: Fetches the template from its location and recursively walks
through all source folders and files (see the skeleton subfolder at the origin). In each
file, the scaffolder checks for variables and performs parameter replacements based
on the settings entered upfront in the form wizard. More details about the templating
syntax can be found in the Backstage documentation.
2. publish:gitlab: All processed source files resulting from the templating process in
step 1 are then published into a source code repository for this component according to
the GitLab settings.
3. fetch:template: Similar to step 1, this fetches the template contents from its location
and recurses through the manifest files (see the manifests subfolder at the origin) to
perform parameter replacements in the files' contents based on the settings that have
been entered.
4. publish:gitlab: All processed manifest files resulting from the scaffolding process
in step 3 are then published into a GitOps repository for this component according to
the GitLab settings.
There’s a lot going on when you click Create and kick off the process of scaffolding from a
Template. If you imagine yourself encountering a portal where this template is already
available, the value of the developer portal comes into clearer focus. Think of bouncing
between service UIs and auth systems to manually perform all the steps automated by the
template actions, from source control in GitLab, to GitOps processes to continually build from
source with Argo CD and Tekton pipelines. You can see a depiction of the services harnessed
together by the Template scaffolding process in Figure 5-18.
RHDH first reads the template contents from GitLab and then writes the scaffolded source
code as well as the resulting GitOps related repository to GitLab. Then the portal instructs
Argo CD to create all the specified resources in the target Kubernetes cluster and namespace.
The POI map example Templates generate manifests in the form of Helm deployment charts
that declare:
• A CI pipeline in Tekton and a webhook event listener that is triggered on every commit
to the source code repository (see the helm/build folder of the related GitOps
repository).
• Everything needed to deploy the backend application, which in this simple case is a
Kubernetes Deployment, Service, and Route, along with the database (see the
helm/app folder of the related GitOps repository).
Template results
Continuing with the itemization of the steps automated by RHDH in its process of scaffolding
resources from the backend Template, take a look at the GitOps resources the scaffolder puts
in place:
• The deployed application, comprising the NestJS service and its PostgreSQL
database. The build pipeline and the webhook that triggers rebuilds and redeployments
when source code changes are committed also run on the deployment target cluster,
seen in the OpenShift web console in Figure 5-21.
You can inspect a component by clicking on its name to open the component Overview.
Note: The available tabs in the Component view depend on the configuration of the
RHDH instance.
In the next sections, you will briefly visit the different tabs from that component detail view to
figure out what you can learn about this registered catalog component that represents the
backend service of the POI map application.
The component screens in RHDH represent everything known about this application, derived
from component metadata and from plug-in integrations with infrastructure services such as
Argo CD, Tekton, and the OpenShift (or Kubernetes) cluster where executables run.
Overview tab
The Overview tab displays a few tiles such as an About section with direct access to the
source code repository (View source) and technical documentation (View TechDocs). In
the upper-right corner of a Component Overview’s About tile you can do the following:
• Trigger the portal to re-read the Component’s catalog-info.yaml and update the
Component with the new configuration and metadata
The information displayed in the detail view and its different tabs is largely derived from the
component's catalog-info.yaml file. To give a simple example, the Links tile holds
custom component links, which are found in the links section of this component's YAML
definition.
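As a hedged illustration of that links section (the URL and title are examples modeled on the POI backend component described later in this chapter):
metadata:
  links:
    - url: https://console.example.com/dev-pipelines/ns/demo01/
      title: Pipelines
      icon: web
Each entry becomes a clickable item in the Links tile, giving developers one-click access to related infrastructure consoles.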
Topology tab
The Topology plug-in provides a tab in the Component view showing the component’s
resources on a deployment target OpenShift or Kubernetes cluster. These include the usual
application resources in Kubernetes API terms, such as Deployment, Job, Daemonset,
Statefulset, CronJob, and Pod. When you click on the POI backend deployment, a side pane
slides in from the right to show more details. You can even retrieve logs from the container
running in the pod directly in the portal’s Component view.
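The Topology and Kubernetes views locate a component's workloads by matching annotations in its catalog-info.yaml. A minimal sketch, using the names this chapter uses for the POI backend:
metadata:
  annotations:
    backstage.io/kubernetes-id: demo01-poi-backend
    backstage.io/kubernetes-namespace: demo01
The kubernetes-id value is matched against labels on the cluster resources, which is how the portal knows which Deployments and Pods belong to this component.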
Find more information about how to install, configure and use the Topology plug-in in the
documentation.
CI tab
In the CI tab, you can explore the build pipelines for the component in question. This view isn’t
limited to just one type of continuous integration, but if applicable, can conveniently display
multiple CI-related activities for the same component. In your example, and as shown in
Figure 5-27, there are two different pipelines, namely:
• a GitLab pipeline that is used for building and publishing the technical documentation
for the component.
• a Tekton pipeline used to create the container image for the backend service.
Besides that, the component’s Overview tab (already discussed; see Figure 5-23) also
contains a specific tile that displays Argo CD data, most notably the sync and health status
and the last synced timestamp.
Find more information about how to install, configure and use this plug-in in the
documentation.
Kubernetes tab
In this tab, you can inspect the various pods underpinning the catalog component that are
running in the target Kubernetes cluster, including some workload-related details. This is
particularly handy when some of these pods have issues or errors.
Tekton tab
The build pipeline that has been pre-configured as part of the software template and
deployed by Argo CD is available in the Tekton tab. It provides a tabular listing with previous
pipeline runs together with the most relevant information for each run.
A click on any of the listed pipeline runs shows the separate pipeline steps/stages and their
respective outcomes (Figure 5-32). It’s also possible to retrieve the logs for each step/stage
individually by clicking on it.
API tab
The API tab shows whether a component consumes APIs from other components or provides
APIs itself, along with API ownership information and, where applicable, system relationships.
The scaffolded backend application provides an API, which you can investigate further by
clicking its name. However, since this is currently a "hello world" REST endpoint, a more
detailed discussion of API-related RHDH features follows at a later stage. We will revisit this
view after we have implemented and added the actual API for the POI backend service.
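The relationships shown in the API tab are declared in catalog metadata. A hedged sketch of how an API entity and its providing Component might be wired together (entity names and owner are illustrative):
apiVersion: backstage.io/v1alpha1
kind: API
metadata:
  name: demo01-poi-backend-api
spec:
  type: openapi
  lifecycle: experimental
  owner: user:example-user
  definition:
    $text: ./openapi.yaml
---
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: demo01-poi-backend
spec:
  type: service
  lifecycle: experimental
  owner: user:example-user
  providesApis:
    - demo01-poi-backend-api
The providesApis reference is what links the Component view's API tab to the API entity and its OpenAPI definition.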
Dependencies tab
Components very rarely live in isolation but instead, are often logically grouped together to
form a superordinate system. In addition to that, components can directly depend on yet
other components or resources such databases, caches, messaging infrastructure and the
like. The Dependencies tab provides insights into these aspects, thankfully even with nice
diagrams which greatly help with understanding more complex component hierarchies and/or
relationships between components and resources alike.
For the registered backend component you can see at a first glance:
As more components will be added by means of applying further templates and by properly
maintaining all these relationships in the respective catalog-info.yaml files during the
development phase, such diagrams will grow and thus become more valuable in making sense
of larger and more complex systems.
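The edges in those diagrams come from relationship fields in catalog-info.yaml. A minimal sketch, assuming a Resource entity named demo01-poi-db exists for the database:
spec:
  type: service
  dependsOn:
    - resource:demo01-poi-db
The kind-prefixed reference (resource:, component:) tells the catalog what kind of entity the dependency points at, and the Dependencies tab renders each such reference as an edge in the graph.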
Docs tab
Having technical documentation for registered catalog components is vital. The core idea is to
live a “docs-like-code” approach. Under the covers, the default way to write documentation is
based on Markdown and the documentation related files are co-located in the same
repository as the component’s source code. What the Docs tab shows is the latest available
version of a component’s rendered HTML documentation, which has been generated and
published as part of the configured CI pipeline.
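A TechDocs setup conventionally pairs the backstage.io/techdocs-ref annotation (seen later in this chapter's catalog-info.yaml) with an mkdocs.yml at the repository root. A minimal sketch, with an illustrative site name:
site_name: demo01-poi-backend
nav:
  - Home: index.md
plugins:
  - techdocs-core
With this in place, Markdown under the repository's docs/ folder (starting from docs/index.md) is rendered into the HTML you see in the Docs tab.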
The backend component has a working TechDocs setup as configured and scaffolded during
the templating phase. Two really handy features, which are explored more closely later during
the application development phase, are:
• Modifying the underlying Markdown file: Clicking the edit icon in the upper
right redirects you to the specific Markdown file in the component's source code
repository, where changes can be made directly. Thanks to the configured CI pipeline,
any change triggers a new pipeline run. Once it finishes, refreshing the documentation
page in the Docs tab shows the updated contents.
In the first section of this form, you define information about the GitLab location used for
publishing the resulting source code and GitOps repositories—leave the defaults as is.
Figure 5-37: Quarkus Service with Angular Template configuration form, first section.
In the second section, you specify important settings, namely, cluster ID, namespace,
application ID and owner for our new software component. Based on this information, it’s
clear into which Kubernetes cluster and namespace this component is eventually going to be
deployed with the entered application ID as its name. The selected user defines the
ownership for this software component. It’s important to make sure that you use the same
namespace and cluster ID as for the backend template you applied earlier (see Template form
wizard).
The third section defines which image registry the container image is pushed to. You can
also choose a custom tag that is applied to the container image during the CI process.
Figure 5-39: Quarkus Service with Angular Template configuration form, third
section.
Clicking NEXT STEP shows a summary of all the entered form fields for a final review.
• In GitLab you have two new repositories (Figure 5-42), demo01-poi-map and
demo01-poi-map-gitops.
• In Kubernetes, there is the deployed Quarkus proxy application, which also serves the
Angular single-page application frontend. Additionally, the build pipeline and the
webhook-related resources have been set up.
Figure 5-45: POI proxy service and frontend entities in the Software Catalog.
Note: Keep in mind that the actual possibilities, the available tabs, and the tiles
anywhere in that detail view primarily depend on the configuration of the RHDH
instance, the installed plugins as well as any component view customizations which may
or may not be in place for your environment.
Since you have already visited the different tabs for a component’s detail view earlier while
exploring the demo01-poi-backend component of the POI map application (see POI
backend in the Software Catalog), this should all look very familiar to you at that stage—at
least for the demo01-poi-map-service component. Nevertheless, it’s worth noting some
differences by taking a closer look at the demo01-poi-map-frontend component. By
clicking on its name in the catalog, you end up in the component overview, as shown in Figure
5-46.
• Topology and Kubernetes tabs: Both are telling you there are no dedicated
resources found for this specific component. That’s fine because the Angular SPA is
hosted and served from within the Quarkus proxy service, meaning it’s the same
resources for Deployment, Pod, etc., as for the demo01-poi-map-service
component.
• Tekton tab: This informs you that no pipeline runs are found, which is also kind of
expected, because the Angular SPA is built together with the Quarkus proxy service
(see demo01-poi-map-service) in the same Tekton-based build pipeline
Just to clarify, of course it would work to treat both these components completely separately.
However, we’ve chosen this architecture for the proxy service and the frontend application
paired with the monorepo approach in order to show RHDH’s flexibility for working with
templates and software catalog components in different ways.
Summary
You’ve explored the basics of how Software Templates are defined, then used two example
Templates to deploy the foundation atop which you’ll implement the POI Map application. In
your running portal, the Software Catalog lists three new entities indexing all of the resources
of your development project, from source code to the executing components of the
application.
In the next chapter, you’ll add code to the scaffolding provided by the Templates to create a
functional POI map.
Application development
You’ve used a couple of Software Templates to put everything in place to get a new
application going. Now you can implement your application features with actual code. While
you’ve peered inside a Template and registered it in your portal, often your usual work would
begin nearer to this point, making changes to existing entities indexed in your Software
Catalog.
After cloning this repository, open the NestJS project in the IDE of your choice. At that point,
you could start to develop the code for the application in question. To speed things up, we
provide a turn-key ready implementation for this backend service in a ZIP archive.
After downloading the ZIP archive to your development machine, perform the following
commands at a terminal prompt in your IDE or OS:
1. Unzip the downloaded archive.
2. Copy all the contents from inside the root folder of the unzipped archive:
cp -r archive/* demo01-poi-backend
3. Run git status. You should see a changeset like the following:
On branch app-dev
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
new file: .env
modified: Dockerfile
new file: assets/nationalparks.json
modified: openapi.yaml
modified: package-lock.json
modified: package.json
new file: prisma/migrations/init/migration.sql
new file: prisma/migrations/migration_lock.toml
new file: prisma/schema.prisma
deleted: src/app.controller.spec.ts
deleted: src/app.controller.ts
modified: src/app.module.ts
4. Commit these application code changes and push the new app-dev branch to
the GitLab repository:
git commit -m 'implement poi backend'
git push -u origin app-dev
5. In the GitLab web UI, create a new merge request for your app-dev branch.
Back in GitLab, you can merge this merge request (Figure 6-4).
Check CI/CD
The merged code will trigger the configured build pipeline via a webhook. After a minute or
so, the code changes are available in the freshly built container image for your backend
service.
initdb:
  scripts:
    db_init.sql: |
      -- CreateTable
      CREATE TABLE "Poi" (
          "id" BIGSERIAL NOT NULL,
          "name" TEXT NOT NULL,
          "description" TEXT,
          "latitude" DOUBLE PRECISION NOT NULL,
          "longitude" DOUBLE PRECISION NOT NULL,
          "type" TEXT NOT NULL,
Figure 6-6: Edit Helm chart’s values.yaml in GitLab to add SQL init script.
This change in the GitOps repository will eventually trigger another build pipeline run and
consequently also lead to a redeployment of the Postgres database instance by Argo CD.
This indicates that the backend service should be up and running fine. After closing the logs,
you can click on the Routes link, which opens a new browser tab. Because the backend service
isn’t serving anything on the / path, the error message shown in the new tab is expected.
By appending /ws/info at the end of the current URL, you should see the following
response:
{"id":"poi-backend","displayName":"National Parks","coordinates":
{"lat":0,"lng":0},"zoom":3}
In the API view, there is a Links tile as part of its Overview tab that has two entries:
• Swagger UI: A direct link to the Swagger UI as served by the running backend service.
• API Spec: A direct link to this API’s underlying openapi.yaml, which resides in the
component’s source code repository.
Clicking the API Spec link opens the GitLab repository showing the openapi.yaml file.
Figure 6-12: GitLab Swagger UI for the POI backend OpenAPI spec.
Figure 6-13: GitLab raw file view for openapi.yaml definition of POI backend.
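For orientation, a minimal OpenAPI document consistent with the /ws/info response shown earlier might look like the following. This is a hedged sketch, not the repository's actual openapi.yaml:
openapi: 3.0.3
info:
  title: POI Backend
  version: 1.0.0
paths:
  /ws/info:
    get:
      summary: Service metadata consumed by the map frontend
      responses:
        '200':
          description: Basic service information
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: string
                  displayName:
                    type: string
Because the API entity's definition points at this file, the Swagger UI and API Spec links in the portal always reflect whatever is committed in the source repository.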
It's immediately visible that what you are reading still reflects the documentation as originally
scaffolded during the templating phase of this component. You can fix that right away with a
short description such as:
The POI backend component represents a web service written in TypeScript with NestJS that
serves points of interest data records from a PostgreSQL database.
Copy and paste this into GitLab’s editor for the docs/index.md file as shown in Figure 6-15
and confirm the change by clicking the Commit changes button.
Once the pipeline has finished successfully, switch back to the browser tab showing the RHDH
component view. Reload the page to see the rendered HTML view with the new
documentation based on the update you just committed.
If you plan to create multiple files, introduce a folder hierarchy for your documentation, or add
images and illustrations, it’s of course recommended that you write the documentation locally
in your Markdown editor or IDE of choice. This allows you to create a separate branch and also
rely on merge requests including reviews for everything you wrote, similar to the workflow that
we used in the Write the code section for implementing the backend component.
Another nice TechDocs feature in RHDH is the ability to raise documentation-related
issues while reading, right from the docs page in question. Simply mark or highlight specific
words or sentences on the page and wait briefly for a tooltip to appear, labeled Open new
GitLab issue (Figure 6-18).
Figure 6-19: GitLab create new Tech Docs issue for the POI backend
component.
Once you are done, click Create issue at the bottom of the page.
Switch to the RHDH component view for the demo01-poi-backend component and select the Issues tab to see the documentation issue you just raised.
Figure 6-21: POI backend component Issues tab with open TechDocs issue.
In summary, Red Hat Developer Hub’s TechDocs feature takes away much of the usual pain and hassle of technical documentation. It is designed to just work, provided it has been configured once upfront for RHDH and is properly integrated into the respective Software Templates.
• Component description: NestJS backend service for the POI Map application.
• API description: API provided by the POI Map application’s NestJS backend service
to load and store POI records from the database.
• Resource description: Database storing the POI records for the POI Map
application’s NestJS backend service.
This opens the catalog-info.yaml file in GitLab’s edit mode, where you can directly
modify the three descriptions in the YAML definition as shown below:
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: demo01-poi-backend
  description: NestJS backend service for the POI map application
  annotations:
    argocd/app-name: demo01-poi-backend-dev
    backstage.io/kubernetes-id: demo01-poi-backend
    backstage.io/kubernetes-namespace: demo01
    backstage.io/techdocs-ref: dir:.
    gitlab.com/project-slug: development/demo01-poi-backend
    janus-idp.io/tekton-enabled: 'true'
  tags:
    - nodejs
    - nestjs
    - book
    - example
  links:
    - url: https://console-openshift-console.apps.cluster-nxfzm.sandbox2909.opentlc.com/dev-pipelines/ns/demo01/
      title: Pipelines
      icon: web
Confirm these metadata changes by clicking Commit changes at the bottom (Figure 6-23).
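The descriptions bullet list earlier mentions three entries, while the excerpt above shows only the Component document. The API and Resource documents typically live in the same catalog-info.yaml, separated by `---` document markers. The sketch below shows only their shape; every field apart from the two descriptions is an assumption:

```yaml
---
apiVersion: backstage.io/v1alpha1
kind: API
metadata:
  name: demo01-poi-backend-api      # assumed name
  description: API provided by the POI Map application's NestJS backend
    service to load and store POI records from the database.
spec:
  type: openapi
  lifecycle: experimental           # assumed lifecycle
  owner: user:demo01                # assumed owner
  definition:
    $text: ./openapi.yaml
---
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: demo01-poi-backend-db       # assumed name
  description: Database storing the POI records for the POI Map
    application's NestJS backend service.
spec:
  type: database
  owner: user:demo01                # assumed owner
```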
If you now go back to RHDH, open the demo01-poi-backend component’s detail view, select the Overview tab, and look at the About tile, it might still show the previous component description. The reason is that RHDH, based on its configuration settings, periodically refreshes such component changes by syncing the respective files from the GitLab repository into the Software Catalog. If you are impatient, you can click the Sync icon in the upper-right corner of the About tile to actively schedule a refresh (Figure 6-24).
Just like these basic changes, more complex modifications can be performed whenever needed, so that the underlying metadata always reflects the current state of your most recent engineering work.
This concludes your RHDH journey for building the NestJS backend service of the POI map
application based on the template you applied earlier (see Backend Templates).
Next up, you will shift focus towards the proxy and frontend code base that has already been scaffolded (see Proxy and Frontend Templates) into a monorepo using the quarkus-with-angular template.
Note: While your browser-based VS Code instance is launching, you might be asked to re-authenticate along the way, potentially more than once, depending on how your RHDH environment has been configured in that regard.
What’s really convenient when taking this route is that you eventually end up in your dedicated
and fully-fledged VS Code instance with the proper Git repository already checked out. This
means you can start right away with coding the application in question—all without going
through any hassle of having to set up everything locally.
In OpenShift Dev Spaces, in your web VS Code instance, open a terminal session by selecting Terminal > New Terminal from the hamburger menu in the upper-left corner of the UI (see Figure 6-34).
Click into the terminal window at the bottom right of the screen and proceed with the
following steps in order to add the pre-created code necessary for the proxy and frontend
applications to work together:
1. Switch to the parent directory of the current project root folder by typing: cd ..
4. Copy all the contents from the app-sources folder into the application’s source code root folder, overwriting all existing files, by running: cp -rf app-sources/* demo01-poi-map/
5. Create a new branch in VS Code by switching to the Source Control view and then clicking the 3 dots menu (...) in the upper-right of the left view pane. Select Branch > Create Branch and use app-dev as the branch’s name.
9. Merge this new app-dev branch into the main branch right away.
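The numbered steps above can also be performed entirely from the terminal. The following sketch recreates the copy-and-merge flow in a temporary sandbox; in Dev Spaces you would run the cd/cp/git commands against the real repos, while here mock folders stand in for app-sources and demo01-poi-map:

```shell
set -e
sandbox=$(mktemp -d)
cd "$sandbox"

# Stand-ins for the pre-created sources and the scaffolded project
mkdir -p app-sources/src demo01-poi-map
echo "console.log('poi');" > app-sources/src/main.ts
git -C demo01-poi-map init -q -b main
git -C demo01-poi-map -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "scaffold"

# Steps 1 and 4: copy the pre-created code over the project root, overwriting files
cp -rf app-sources/* demo01-poi-map/

# Steps 5 and 9: create the app-dev branch, commit, then merge it into main
cd demo01-poi-map
git checkout -q -b app-dev
git add .
git -c user.email=demo@example.com -c user.name=demo commit -q -m "Add app code"
git checkout -q main
git merge -q app-dev
ls src/main.ts
```

The git commands mirror what the Source Control view does for you behind the scenes; only the branch name app-dev and the folder names come from the text.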
Similar to the quick edit you made to the backend component (demo01-poi-backend) earlier, you can perform small-ish updates to the documentation by changing the Markdown file right in GitLab’s file edit mode. For bigger documentation enhancements, you might want to work in a clone of the demo01-poi-map-service repo, in the editor or IDE you prefer.
Summary
This pilot episode is a cliffhanger. You have seen a developer portal streamline your path to
new application scaffolding and your map app’s basic implementation. You’ve already got
running code.
Usually at about this point in a story we have to introduce a little conflict to hold the reader’s—
I mean hero’s—interest. “Into each life some rain must fall.” (That’s called
foreshadowing. We learned that from Edgar Allan Poe. And Randy Newman.)
Join us next time—when Developer Portals: Preparing to Perform with Red Hat Developer Hub
is published in full.
Additional resources
Visit the Red Hat Developer Hub product page to learn more:
https://developers.redhat.com/rhdh
Hans-Peter Grahsl is a Red Hat Developer Advocate and open source enthusiast who loves
helping developer communities improve their productivity. He is passionate about event-
driven architectures, distributed stream processing, and data engineering. Grahsl lives with his
family in Graz, Austria, and travels often to speak at international developer conferences.
Ryan Jarvinen is a Red Hat Principal Developer Advocate and noted speaker living and
working in Sacramento, California. Jarvinen enjoys learning about best practices for
developer experience and usability in the Cloud Native ecosystem, and helping teams develop
strategies for maximizing collaboration using open source technologies.