
Power BI, FIDO, Postgres, ADO.NET, C#

SEP/OCT 2022
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95

EVENT SOURCING

Also in this issue: Benchmarking .NET Applications • Exploring Data Science and Power BI • The World According to YARP

Cover art: © AdobeStock/klyaksun + onsightdesign
TABLE OF CONTENTS

Features

8 FIDO2 and WebAuthn
If your system relies on username and passwords for security, you may be in trouble. Sahil shows you that true security can be simpler than you think.
Sahil Malik

14 YARP: I Did It Again
Yet Another Reverse Proxy (YARP) might sound like something you'd rather not do, but Shawn shows you how it can improve performance if you've got microservices, load balancing issues, URL rewriting, or tight security issues.
Shawn Wildermuth

20 Simplifying ADO.NET Code in .NET 6: Part 2
The second installment in Paul's new series refactors the code you built in Part 1 to make it more reusable. You'll also learn to get data from a view, handle multiple result sets, get a scalar value, and call stored procedures.
Paul D. Sheriff

34 Customized Object-Oriented and Client-Server Scripting in C#
You need full control of how your functionality is implemented. Vassili tells you how to use classes and objects for great control, and how to implement them in C# in this article about object-oriented and client-server scripting.
Vassili Kaplan

40 Benchmarking .NET 6 Applications Using BenchmarkDotNet: A Deep Dive
You already know you need to identify and maintain standards before you ship your app. Joydip looks at how to set benchmarks and why they're essential.
Joydip Kanjilal

50 Event Sourcing and CQRS with Marten
After examining persisted system states in a relational database, Jeremy discovers that he needs to use the Marten library to provide robust support for Event Sourcing.
Jeremy Miller

62 Putting Data Science into Power BI
Power BI seems to have everything you need for data analytics. Helen shows you how to get the most out of it and how to make some cool charts, too.
Helen Wall

70 Getting Started with Cloud Native Buildpacks
Take advantage of modern container standards using cloud-native buildpacks. Peter shows you how.
Peter Mbanugo

Columns

74 CODA: On Consulting and Organizations
There's more to the role of a consultant than showing up and pounding out line after line of code. John explains the nuances.
John V. Petersen

Departments

6 Editorial
16 Advertisers Index
73 Code Compilers

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $50.99 USD. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. Back issues are available. For subscription information, send e-mail to subscriptions@codemag.com or contact Customer Service at 832-717-4445 ext. 9. Subscribe online at www.codemag.com

CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A. POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.



EDITORIAL

Recovering Creativity
It's no secret that the last two and a half years have been tough. We've gone from locked down in our homes to out and about freely once vaccines were developed, then back on lock down because of omicron and its variants. Rinse, lather, repeat…. This constant state of worry and concern has taken a real toll on our collective psyches. One part of our psyche that has suffered is creativity. In recent discussions with Melanie, I let it be known that my creative well was EMPTY. Zero, zilch, nada. The creative energy bank was penniless. These discussions centered around this very editorial and my difficulty finding a thread. Every other month I put together this editorial, and every time, I try to have something to say. This time, I had zero. Until this morning.

YES! The concept for this editorial sprang forth into my mind while doing my morning pages at 4:30 a.m. As evidence, I present the note card where I jotted down rough ideas for this editorial, the central theme being how to recover creativity in an environment where we're still bombarded with negative data on what feels like a daily basis.

Figure 1: Notes to myself

Recovering Creativity Step 1: Take Needed Down Time
The first step I took on this road to recovery was some sorely needed down time. All work and no play makes Rodney a dull boy. I took two weeks off from work and spent time with my family. The first week I went to London with my son Isaiah. He'd never been to the UK, so this was a real treat as I got to experience it through his fresh set of eyes. We visited the Leake Street Graffiti Tunnel. This is by far one of my favorite parts of London, as it's basically a temple to creativity. Every time I go to London, I make sure to visit this cathedral of paint to see what new and interesting graffiti has been added. Figure 2 shows a small portion of some of the awesome spray paint art. This semi-seedy location is a Zen zone for me.

Figure 2: Great art is inspiring.

Recovering Creativity Step 2: Step Away from the Tech and Get Outside
After a week in London, we flew home and set out—a mere 36 hours later—to spend the second week enjoying family time in the Colorado Rockies. This part of the trip had its own form of creativity recovery in the forms of hiking, spelunking, and white-water rafting. Yes, I went white-water rafting. Getting outdoors does wonders for my psyche.

Recovering Creativity Step 3: Revisit Old Creativity
The next step was to revisit a project put on hold right at the start of 2020. This project was to develop a keynote deck for a conference in Canada. In late 2019, I was invited to give a keynote and had made good progress on my slide deck until COVID ground everything to a halt. It's 2022 and this conference is back on track and my keynote slots have been re-secured. Uh oh! Guess I'd better finish that deck. SET FREAK OUT TO ON!

This endeavor had me a bit concerned. I was much more optimistic at the beginning of 2020 than I am currently. I was unsure if I could get back into that optimistic frame of mind to build and deliver what a keynote should be: an inspiring story to get folks jazzed for the conference they're attending. So I was pleasantly surprised when I opened my slide deck to be greeted by a cool collection of ideas and stories that provide the backbone of this keynote. The optimism was there, the inspiration was there, and I immediately found that the certainty was there. I had the creativity I needed to complete this keynote.

Recovering Creativity Step 4: Embrace What Works
My final step (at least for the purposes of this editorial) in helping on the road to recovering my creativity is to embrace what works. In this case, it's my morning pages. I have an admission, though. I did morning pages faithfully every day for over a year and it gave me untold bursts of creativity and honestly helped me get through the first year of COVID lock down. As things typically go, I stopped doing the pages faithfully and eventually stopped altogether. Well, three days ago, I made the decision to return to the pages, and it took three days of fighting the NEGS (my word for negativity, or the gremlins that feed your doubt) to keep moving forward through the pages, and today, the pages delivered.

Recovering Creativity Step 5: Keep Optimistic!
Times have been tough and they may continue to be rough. It's okay. It can be difficult to keep your creative well full and, in all reality, sometimes it just gets empty. Remember: You're not alone, even if you sometimes find yourself creatively drained. We all fall down sometimes. Stay optimistic. We'll get through this together.

Rod Paddock

CUSTOM SOFTWARE DEVELOPMENT • STAFFING • TRAINING/MENTORING • SECURITY

MORE THAN JUST A MAGAZINE!

Does your development team lack skills or time to complete all your business-critical software projects? CODE Consulting has top-tier developers available with in-depth experience in .NET, web development, desktop development (WPF), Blazor, Azure, mobile apps, IoT and more.

Contact us today for a complimentary one hour tech consultation. No strings. No commitment. Just CODE.

codemag.com/code
832-717-4445 ext. 9 • info@codemag.com

ONLINE QUICK ID 2209021

FIDO2 and WebAuthn


Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

Authentication has been an essential part of applications for some time now because applications need to know some information about the user who's using the application. For the longest time, the solution to this has been username and passwords. Username passwords are popular because they're convenient to implement. But they aren't secure.

There are many issues with passwords. First, there's the problem of transmitting this password securely. If you send the password over the wire, a man-in-the-middle could sniff it. That pretty much necessitated SSL over such communication, or the equivalent of creating a hash of the password that's sent over the wire instead of the actual password. But even those techniques didn't solve the problem of the server securing the password, or a secure hash of the password. Or, for that matter, keeping you safe from replay attacks. Increasingly complex versions of this protocol were created, to the point where you could, with some degree of confidence, say that you were safe from man-in-the-middle attacks or replay attacks.

Users created simple, easy-to-remember passwords, and brute-force techniques guessed those passwords. So we came up with complex requirements for passwords, such as: your password must contain an upper case, lower case, special character, and minimum length—and yet people still picked poor passwords. When they didn't pick poor passwords that were easy to remember, they would reuse passwords across different systems. Or they would use password managers to store their passwords, until the password manager itself got compromised.

But even then, you're not safe from passwords being leaked. Worse, leaked passwords are not detected—you don't know if your password has been leaked until the leak is discovered. And these leaks could occur on a poorly implemented service. This means, no matter what you do, you're still insecure.

Don't Despair
There are solutions. There are concepts like MFA or one-time passwords that can be used in addition to your usual password. This is what you've experienced when you enter a credential, but in addition, you have to enter a code sent to you via SMS or from an authenticator app on your phone.

MFA and one-time passwords are great. In fact, I'd go to the extent of saying that if there's a service you're using that uses only username password, just assume it's insecure, and don't use it for anything critical. Additionally, pair it with common-sense practices: own your domain name, use a separate email address from your normal-use email address for account recovery, and pick secret questions and answers that aren't easy to guess and that don't make sense to anyone else.

As great as MFA and one-time passwords are, they're still not a perfect picture. There are a few big issues with this approach.

First, they're cumbersome to manage for the end user. I work with this stuff on a daily basis, and I find it frustrating to manage 100s of accounts and multiple authenticator apps, and I worry that if I ever broke my phone accidentally, I'd be transported to Neanderthal times immediately. I can't imagine how a common non-technology-friendly person deals with all this.

Second, MFAs and one-time passwords are both cumbersome and expensive for the service provider. All those SMS messages and push notifications cost money. This creates a barrier to entry for someone trying to get a service off the ground. Then there's the question of which authenticator app to trust and whether that app can be trusted. Is SMS good enough?

Third, there's the issue of phishing. As great as MFA is, someone can set up a service that looks identical to a legit service, and unless you have very keen eyes watching every step, you may fall for it. Unfortunately, even the best of us are tired and stressed at times, and that's when you fall for this. In fact, the unscrupulous service that pretends to be a legit service could simply forward your requests to the legit service after authentication while stealing your session. So you may think everything is hunky dory, but your session has effectively been stolen.

Finally, there's authentication fatigue. Hey, I just want to log in and use a system. Zero trust dictates that you assume a breach, so it's common for services to over-authenticate. This creates authentication fatigue, and an already fatigued user could blindly approve an MFA request, especially if it's cleverly disguised. It only takes one mistake for a hacker to get in the house, and then they can do plenty of damage, potentially remaining undetected for a long time.

What am I Trying to Solve?
I'm not trying to secure passwords or make a better MFA solution here. The fundamental problem I wish to solve here is how an application can securely trust a user's identity, such that the identity is not cumbersome to manage, is secure, convenient, and not stealable.

Let's refine this problem further. The problem I'm really trying to solve is that a user goes through a registration process, typically the first time you encounter the user. The next time the user shows up on the application, you want to make sure it's the same person behind the user ID. You want to do so with 100% confidence, and you want to do so with relative ease for users and the application.

If you had a clean slate to architect this with the technology available today, how would you do it?

Imagine if, during registration, the user generates a key pair. This is a typical certificate. There's a private key, and there's a public key. With the private key, you can sign stuff, you can encrypt stuff, but you never share that private key. You can keep the private key in your private possession forever. But the public key is public information. With the public key, you can only decrypt or verify the signature.


During registration, you generate a key pair that's unique for the service, and the public portion of that key pair is shared with the service. The server then stores it securely and connects it with this particular user (you).

Next time you wish to authenticate, the server generates a challenge that's just a random string. This challenge is communicated over to the user securely over HTTPS. This challenge is encrypted or signed by you using your private key that's unique to this service. This encrypted or signed string is now sent back to the service. The service can now validate the signature via the public key associated with the user.

This sounds like something that could work, but a few interesting things happened here. At no point was the private key communicated anywhere except on your device. As you'll see later in this article, there are plenty of hardware devices that allow for the storage of this private key securely.

There's also no need for a cumbersome MFA prompt or the maintenance of it. There's also no risk of SMS spoofing or your phone number rolling over to the next subscriber.

There's no additional cost for the service to send MFA prompts. The service just needs to remember a mapping of the public key with the user, so it's no worse than remembering a password. This can be paired with existing MFA techniques, if you choose to do so.

Sounds like we're on to something interesting here. Let's dig further.

Hardware Support
I've boiled the problem down to the much simpler problem of keeping your private keys secure. The good news is that there exists a lot of interesting hardware to help you do so. These look like USB keys, or even NFC keys, that securely store your private keys. An example of this key is a YubiKey, as shown in Figure 1.

Figure 1: YubiKeys

These private keys require some interaction from the user to extract the private key briefly when it is needed. Additionally, you now see things like trusted platform module (TPM) requirements baked into Windows 11 and macOS/iOS, supporting things like TouchID on Macs and FaceID on phones, which, paired with secure enclaves, give you a pretty neat solution around storing these keys securely in iCloud.

The Apple ecosystem moves faster because they have full control of the end-to-end story, and it also helps that they make a phone. I'm particularly excited about the possibility of roaming these keys using iCloud. What this means for the end user is that they just use their devices as they normally do. They don't need to carry a separate dongle or device or risk losing it. And yet they gain the convenience of never having to bother with a password while remaining secure. This is the holy grail of security—security and convenience, so users won't try to work around inconveniences. All this is pretty new at the time of writing this article; the support for this technology was introduced in iOS 15 and future versions of OSs will improve this and make it more accessible. I'm quite excited about where this is headed.

Hardware aside, you probably need a common understanding of protocols for this standard to be implemented, right? Let's talk about that next.

Protocols
The overall concept sounds great, but if various services don't speak a common language, this concept will never gain a foothold. This is why this concept has been solidified as protocols. Like anything else in identity, protocols around this concept have been evolving.

The word "FIDO" comes from the FIDO alliance, which is the organization pushing for this standard. You can check them out at https://fidoalliance.org. If you check out their website, you'll see them describe specs on UX guidelines around strong authentication, but more interestingly, they talk of specific specs such as FIDO universal second factor (FIDO U2F), FIDO universal authentication framework (UAF), and FIDO2, which includes W3C's Web authentication (WebAuthn) spec, and FIDO client-to-authenticator protocol (CTAP).

All right, that was a lot of acronyms I just threw at you. Let's break it down in Figure 2. FIDO2 is the umbrella term of what I'm concerned with here. When the user needs to register or authenticate, they interact with an external authenticator or a platform authenticator. An external authenticator could be a USB key, such as a YubiKey. It looks just like USB flash storage but may have additional biometric protection on it. Or the user could use a platform authenticator, such as FaceID, TouchID, Windows Hello, etc. When the user interacts with a relying party (the service you are trying to access), it uses a protocol called WebAuthn.

Figure 2: FIDO2 and its moving parts

Registration Process
When a user first lands on a site, they create an account. This is called the registration process. Here's how it would work if you were to do this under the FIDO2 protocol.

The user lands on the site, and says, "hey, I want to register a key." The server then generates a challenge, a random string, and passes it over TLS to the user along with a bunch of other information. A critical part of this information is the relying party ID. The relying party ID must match the TLD or top-level domain of the site the user is on. This is verified with the SSL cert being used by the server. Once the client has verified the identity of the server, the client then



generates a public-private key pair. The private key is never sent over the wire. But the public key, along with the signed challenge, is sent back to the server. Along with this, it also sends a credential ID generated by the security key.

The server then verifies the signed challenge with the public key. If it passes signature verification, the server then stores the credential ID and the public key, and sets the counter to zero. Every time an authentication is performed, this counter increments, to prevent the cloning of keys. There should be only one instance of the key in the wild, and if the counter isn't sequential, authentication is denied. This entire process can be seen in Figure 3.

Figure 3: The registration process

Authentication Process
At a later time, the user lands on the site and wishes to authenticate themselves. The server communicates back to the user a randomly generated challenge, which is just a string, and a list of credential IDs for the user.

Why are there multiple credential IDs? It's because you want to support more than one key per user, just in case one key gets lost. Or perhaps one key lends you a greater level of access than the other.

The user now receives the challenge. At this point, the user's computer verifies the server identity, and uses the credential ID to find the appropriate key. It then increments the counter so it stays in synch with the server, and it signs the challenge using the private key.

This challenge is then communicated back to the server, which then verifies the signed challenge with the public key, and increments the counter. This entire process can be seen in Figure 4.

Figure 4: The authentication process
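The counter check is easy to picture in code. Here's a small sketch of the server-side bookkeeping; the StoredCredential type and its member names are my own invention for illustration, not part of any FIDO2 library.

public class StoredCredential
{
  public byte[] CredentialId { get; set; }
  public byte[] PublicKey { get; set; }
  public uint SignatureCounter { get; set; }
}

public static bool ValidateCounter(
  StoredCredential stored, uint counterFromKey)
{
  // A counter that didn't move forward suggests a cloned key: deny.
  if (counterFromKey <= stored.SignatureCounter)
    return false;

  // Stay in sync for the next authentication.
  stored.SignatureCounter = counterFromKey;
  return true;
}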
Set Up FIDO2 Auth in Azure AD
Many websites support FIDO2. Microsoft has been a prominent participant in this ecosystem as well, along with Google, AWS, and many others. Azure AD fully supports FIDO2 authentication for its users. This means that if your application uses Azure AD for authentication, you can make use of FIDO2 easily today. Additionally, you can also lock down access to critical resources such as the Azure Portal or Office 365 using FIDO2. Strong auth that's convenient for users is a win for everyone. If users use FIDO2, they are less susceptible to MFA fatigue and accidentally completing the MFA challenge. They're also more secure as a result.

Let's see how you can go about setting up FIDO2 authentication in Azure AD. To follow through these steps, you'll need a physical FIDO2 key. I'm using a YubiKey.

Start by logging into the Azure Portal at portal.azure.com as a tenant admin, and then navigate to the Azure Active Directory blade. In that blade, look for Security, and navigate to Authentication Methods. Here, under policies, choose FIDO2 security key, and choose to enable it for select users. As you can see in Figure 5, I've chosen to enable it for testuser10.

Figure 5: Enabling testuser10 to use FIDO2 as an authentication method



Note that, optionally, you can configure FIDO2 at the tenant level under the Configure tab in the same area. This can be seen in Figure 6. There are a number of settings here, and you can allow users to set up FIDO keys themselves. The other option is for an admin to set these keys for the user. You can choose to enforce attestation or not. Usually, attestation is useful in enterprise scenarios where you want to disallow certain keys from being used. However, using any key is better than a username password, so it's okay to leave this set to "no". You can restrict keys to certain well-known keys, so users don't just buy their own keys and start registering them. You'd do this by adding the AAGUIDs of the keys. This creates a huge management overhead, but it's incredibly secure.

Figure 6: Configure FIDO2 settings at the tenant level.

As you can see from Figure 6, I've allowed self-service set up. Also, as you can see from Figure 5, I've enabled testuser10 to use FIDO2 as an authentication method. In a relatively modern browser (I'm using Chrome), in a non-private window, visit https://myprofile.microsoft.com and sign in as the user you've enabled FIDO2 authentication for. In my case, that's testuser10@sahilmalikgmail.onmicrosoft.com. Note that you may already have MFA enabled on this user, and that's okay—one user can have numerous authentication methods.

Once signed in, visit the "Security" section and click on Add method, as shown in Figure 7.

Figure 7: Add an authentication method for the user.

When prompted, choose Security key as the authentication method you'd like to use, and click Add. You'll then be prompted to pick what kind of device you wish to use. This can be seen in Figure 8.

Figure 8: Security key type

I have a USB-C YubiKey, so I'll pick "USB device". Next, I'm shown a message saying that I should have my key ready, and when prompted, plug in the key and touch the key's sensor or button to finish setting it up. As soon as I click Next, I'm redirected to a new window to finish set up.

Here, Azure AD shows you a message, but the real authentication dance is built into the browser using the WebAuthn protocol. The browser now prompts you to plug your security key in. This can be seen in Figure 9.

Figure 9: Chrome prompts you to plug your key in.

In my case, I use this key for numerous purposes, so it's locked with a PIN. As soon as I plug it in, I'm asked to enter the PIN. Once I do that, the browser then asks me if I wish to use this key with the given website, which is, in this case, login.microsoftonline.com. Although it's not very difficult to build FIDO2 authentication right on your website, most



identity providers already support this, and delegating this responsibility to them is usually what we do these days anyway. This can be seen in Figure 10.

Figure 10: Chrome prompting you if you wish to use your FIDO2 key with AAD

To allow this key to be used with login.microsoftonline.com, you now have to touch the key. This proves physical possession of the key. Remember: The key, if cloned, can easily be detected using an ever-increasing counter.

As soon as I touch the key, I'm shown a third prompt, asking me if I allow the site (in this case AAD) to see the details of my key. Say Allow. This can be seen in Figure 11.

Figure 11: Sending information about the key

What's interesting is that all this was built right into the browser. AAD has been patiently waiting for your key to be registered before moving further. At this point, your key is registered, and AAD asks you to name it. Giving it a meaningful name, I called mine SahilKey, and I soon see a message confirming that the key is ready for use. This can be seen in Figure 12.

Figure 12: FIDO2 key is ready for use with AAD.

Now let's see the sign-in experience.

Go ahead and sign out from myprofile.microsoft.com. You can do so by clicking the person-like icon on the top right-hand corner and choosing Signout. Now relaunch the browser, and visit any site protected by the same AAD. I'll just use myprofile.microsoft.com again. Enter your username (testuser10 in my case), and pick "Sign in with Windows Hello or a security key."

Here's a pet peeve. I'm on a Mac, and this system should be smart enough to not confuse the user with "Windows Hello" on a Mac. But I digress.

I do have a security key, so I'll click on that link. Chrome now shows me a bunch of options to sign in using. This can be seen in Figure 13.

Figure 13: Many ways to sign in.

The exact list you see may be different. You may also be prompted for Bluetooth permissions at this point. I intend to use a USB YubiKey, so I pick USB security key. Now Chrome takes you through a simple sign-in process that involves touching the key, entering a PIN, and boom, you're signed in.

See how easy that was? Not only that, when I signed in using FIDO2, I didn't have to enter a password or remember a password, and the server's workload is also greatly reduced. Plus, I never sent anything sensitive over the wire. It's a win-win for all.

What if you lose the key? Well, you can always fall back to a back-up authentication method, such as an authenticator app in Azure AD. However, it's also not atypical to register more than one security key.

Summary
Passwords suck and I hate dealing with them. I like MFA, but it's so inconvenient to deal with MFA sometimes. FIDO2 is supported by a number of organizations. And it really simplifies the log-in process while keeping my credentials secure. Some of the places I use FIDO2 already are Facebook, Twitter, GitHub, my Azure and Google accounts, plus a few others. If you wish to see who supports FIDO2, visit www.



dongleauth.info. You'd be pleasantly surprised to see how many sites already support FIDO2.

No doubt this identity and security space will continue to evolve, but everyone has, at this point, unanimously agreed to kill passwords. If username password is your line of defense, I have bad news for you.

FIDO2 keys aren't perfect. There's a physical key that you must carry. But with platform authenticators, and technologies such as FaceID that are better at identifying individuals than fingerprints, the keys really are a very compelling argument.

The best part is that almost every major identity provider already supports it, and it's not hard to set up. If I use a username password as the only protection on a website, I just assume it to be insecure. I won't use anything useful if it at least doesn't support MFA. But MFA is inconvenient. So if a site gives me the option to use FIDO2 keys, that's my solution.

How about you? Do you have any critical sites that you use just username password on?

Sahil Malik

[Advertisement]
dtSearch® — Instantly Search Terabytes

dtSearch's document filters support:
• popular file types
• emails with multilevel attachments
• a wide variety of databases
• web data

Over 25 search options including:
• efficient multithreaded search
• easy multicolor hit-highlighting
• forensics options like credit card search

Developers:
• SDKs for Windows, Linux, macOS
• Cross-platform APIs cover C++, Java and recent .NET (through .NET 6)
• FAQs on faceted search, granular data classification, Azure, AWS and more

Visit dtSearch.com for:
• hundreds of reviews and case studies
• fully-functional enterprise and developer evaluations

The Smart Choice for Text Retrieval® since 1991
dtSearch.com • 1-800-IT-FINDS



ONLINE QUICK ID 2209031

YARP: I Did It Again


Shawn Wildermuth
shawn@wildermuth.com
wildermuth.com
@shawnwildermuth

Shawn Wildermuth has been tinkering with computers and software since he got a Vic-20 back in the early '80s. As a Microsoft MVP since 2003, he's also involved with Microsoft as an ASP.NET Insider and ClientDev Insider. He's the author of over twenty Pluralsight courses, has written eight books, is an international conference speaker, and is one of the Wilder Minds. You can reach him at his blog at http://wildermuth.com. He's also making his first, feature-length documentary about software developers today called "Hello World: The Film." You can see more about it at http://helloworldfilm.com.

With developers becoming increasingly comfortable with microservices, reverse proxies have gained visibility. Inside Microsoft, someone noticed that a number of teams were building reverse proxies for their own projects. Luckily, someone realized that a single, reusable reverse proxy would be something that we could all benefit from. This led them to release "Yet Another Reverse Proxy" or YARP. Let's talk about what reverse proxies are and how YARP works.

This brings us to two important questions: "What is a Reverse Proxy?" and "How do I create a reverse proxy?"

What's a Reverse Proxy?
If you're like me, the word "proxy" is an overloaded term. In different contexts, the word proxy means something different to different people. In this case, I'm talking about a server that's an intermediary between the caller and the receiver of a networking call (usually HTTP or similar). Before you can understand a reverse proxy, let's talk about forward proxies (or proxy servers, as you might be familiar with).

A proxy server is a server that takes requests and re-executes the call to the Internet (or intranet) on behalf of the original caller. This can be used for caching requests to improve speed of execution or for filtering content (as well as other reasons). In Figure 1, you can see a typical proxy server diagram.

Figure 1: Proxy server

A reverse proxy is very much like a proxy server, but, not too surprisingly, in reverse. Instead of intercepting calls going outside the Internet/intranet, a reverse proxy intercepts calls from the outside and forwards them to local servers. Often the proxy server is the only accessible server in this scenario. If you look at Figure 2, you can see that all calls come into the reverse proxy. Often the caller has no idea that there's a reverse proxy.

Figure 2: Reverse proxy

Now that you have a general idea of what a reverse proxy is, let's talk about the why of reverse proxies.

Do I Need a Reverse Proxy?
Many projects have no need for a reverse proxy. You should learn about them anyway, because it's another arrow in your development quiver to use when you need it. The use-case for using a reverse proxy is fairly well defined. The reverse proxy can be used in microservice scenarios where you don't want individual clients to know about the naming or topology of your data center.

Reverse proxies aren't only helpful in those microservices projects. Here are some other reasons to use a reverse proxy:

• Service gatekeeping
• Load balancing
• SSL termination
• Security
• URL rewriting

Although you might want to use a reverse proxy for all of these reasons, you don't need all of these services. Use a reverse proxy in the way your application works. You can use reverse proxies as a product (e.g., CloudFlare) or built into your own projects.

Let's look at a new support in .NET projects called YARP.

Using YARP
The most obvious use-case for many of you reading this article is to use a reverse proxy to provide an API gateway for microservices. A reverse proxy can expose a server that represents a single surface area for requests. The details of how the service is implemented and where the actual service resides are made opaque to the actual clients. This is what I call service aggregation. In this case, a reverse proxy is used to accept calls from clients and then pass them off to the underlying service (or cluster of services). This allows you to change the composition of the microservice without breaking clients.

You can use service aggregation to marry disparate systems without having to rewrite or change the underlying technology. For example, you might have a Java system from an acquisition, a .NET project that's built in-house, and a Python machine learning project that you have to integrate. By using a reverse proxy, you can create a union of all these services to provide a single API service area for these different technologies.

Now that you've seen a bit about what a reverse proxy is, let's see how to implement one in a .NET Core project using the YARP library. To get started, you need any ASP.NET Core project. Let's create an empty project (calling it DidItAgain.Proxy):

> dotnet new web -n DidItAgain.Proxy

To use YARP, you just need to add the NuGet package:

> dotnet add package Yarp.ReverseProxy

Once installed, you can wire up the middleware. First, you need to add the reverse proxy services and configure it:

var bldr = WebApplication.CreateBuilder(args);

bldr.Services.AddReverseProxy();



As you can see, you first add the proxy service dependencies with AddReverseProxy. You need to configure it, but I'll get to that soon. Before you do that, let's add the middleware:

var app = bldr.Build();

app.MapReverseProxy();

app.MapGet("/", () => "Hello World!");

app.Run();

Configuring the Reverse Proxy
In YARP, the reverse proxy needs to know what the pattern is that you're looking for in requests and where to pass the requests to. It uses the term Routes for the request patterns and uses Clusters to represent the computer(s) to forward those requests to. This means that you need a way of providing the proxy with a set of Routes and Clusters. The most direct is to use a section in your configuration files:

var proxy = bldr.Services.AddReverseProxy();

proxy.LoadFromConfig(
  bldr.Configuration.GetSection("Yarp"));

By calling LoadFromConfig, the proxy expects a section that conforms to the schema of the proxy configuration. It doesn't matter what you call the section, as long as it's a set of Routes and Clusters. For example, here's the general structure of the configuration section:

{
...
  "Yarp": {
    "Routes": {
...
    },
    "Clusters": {
...
    }
  }
}

Let's start with the Cluster:

"Clusters": {
  "CustomerCluster": {
    "Destinations": {
      "customerServer": {
        "Address": "https://someurl.com/"
      }
    }
  }
}

A Cluster (named CustomerCluster) is just a destination for an endpoint server(s). Note that there could be multiple destinations and each could use different semantics to determine where to locate an endpoint server and transform it. Requests typically keep their paths and append them to the address. This is typically matched with a Route:

"Routes": {
  "CustomerRoute": {
    "ClusterId": "CustomerCluster",
    "Match": {
      "Path": "/api/customers/{**catch-all}"
    }
  }
}

A route (named CustomerRoute in this example) is a set of rules for matching the request and pointing to a Cluster via the ClusterId. In this example, the route matches calls to the proxy server that start with /api/customers/ and directs them to the customer Cluster. Routes can match based on various criteria:

• Path (like you've just seen)
• Headers
• Query string parameters
• HTTP method
• Host name

This gives you a lot of control over how the reverse proxy matches URIs to other computers. Although typically used as a façade to your own servers, it can be used to proxy to wherever you want.
Programmatic Configuration
Although using the configuration file is a common way to configure the proxy server, often you want to have a data-driven approach or integrate the proxy with a service discovery service (e.g., the Microsoft Tye project). To supply the configuration, you'll need to create a class that implements the IProxyConfigProvider interface:

public class YarpProxyConfigProvider
  : IProxyConfigProvider
{
  public IProxyConfig GetConfig()
  {
    return new YarpProxyConfig();
  }
}


The provider requires you to implement a class that represents the IProxyConfig interface. Although this interface is simple, the IProxyConfig is where the building up of the configuration happens. For example:

public class YarpProxyConfig : IProxyConfig
{
  readonly List<RouteConfig> _routes;
  readonly List<ClusterConfig> _clusters;
  readonly CancellationChangeToken _changeToken;
  readonly CancellationTokenSource _cts =
    new CancellationTokenSource();

  public YarpProxyConfig()
  {
    _routes = GenerateRoutes();
    _clusters = GenerateClusters();
    _changeToken = new
      CancellationChangeToken(_cts.Token);
  }

  public IReadOnlyList<RouteConfig> Routes
    => _routes;
  public IReadOnlyList<ClusterConfig> Clusters
    => _clusters;
  public IChangeToken ChangeToken =>
    _changeToken;
}

You can see that the interface has three members. The Routes and Clusters properties return the lists of routes and clusters (with the same structure you see in the config file above). The ChangeToken is used to notify the system of changes to the configuration, if needed. Creation of the clusters looks like you'd expect:

private List<ClusterConfig> GenerateClusters()
{
  var collection = new List<ClusterConfig>();
  collection.Add(new ClusterConfig()
  {
    ClusterId = "FirstCluster",
    Destinations =
      new Dictionary<string, DestinationConfig>{
        {
          "server",
          new DestinationConfig()
          {
            Address = "https://someserver.com"
          }
        }
      }
  });
  return collection;
}

Although I'm hard-coding the configuration (which is really not any better than configuration files), you could use code to determine how the clusters should be configured. Creating routes is similar:

private List<RouteConfig> GenerateRoutes()
{
  var collection = new List<RouteConfig>();
  collection.Add(new RouteConfig()
  {
    // RouteId added here: YARP requires each
    // route to have a unique ID
    RouteId = "FirstRoute",
    ClusterId = "FirstCluster",
    Match = new RouteMatch()
    {
      Path = "/api/foo/{**catch-all}"
    }
  });
  return collection;
}
Again, this should look a lot like the configuration file example. There's a difference in how you wire up the services for the reverse proxy:

using DidItAgain.Proxy;
using Yarp.ReverseProxy.Configuration;

var bldr = WebApplication.CreateBuilder(args);

bldr.Services.AddTransient<IProxyConfigProvider,
  YarpProxyConfigProvider>();

bldr.Services.AddReverseProxy();

var app = bldr.Build();

Notice that you're adding your provider into the services collection and adding the reverse proxy. When it's constructed, it queries for the proxy config provider on its own and finds yours.
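One note on the ChangeToken: it's what lets you swap configuration at runtime. As a rough sketch — this SignalChange method is my own illustration, not a YARP API, and it assumes you drop the readonly modifiers on the _cts and _changeToken fields:

public void SignalChange()
{
  CancellationTokenSource previous = _cts;

  // Hand out a fresh token before cancelling the old one, so the
  // next GetConfig() call returns a config with a live ChangeToken.
  _cts = new CancellationTokenSource();
  _changeToken = new CancellationChangeToken(_cts.Token);

  // Cancelling the old token prompts YARP to call GetConfig() again.
  previous.Cancel();
}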

ADVERTISERS INDEX

CODE Consulting — www.codemag.com/code — 7
CODE Consulting — www.codemag.com/onehourconsulting — 75
CODE Legacy Modernize — www.codemag.com/modernize — 69
CODE Legacy Beach — www.codemag.com/modernize — 76
Component Source — www.componentsource.com/compare — 19
DevIntersection — www.devintersection.com — 2
dtSearch — www.dtSearch.com — 13
LEAD Technologies — www.leadtools.com — 5
Live on Maui — www.live-on-maui.com — 61

Advertising Sales: Tammy Ferguson, 832-717-4445 ext 26, tammy@codemag.com

This listing is provided as a courtesy to our readers and advertisers. The publisher assumes no responsibility for errors or omissions.


Now that you've seen how to configure it, let's talk about how to configure the proxy for different features. From now on, I'll go back to the configuration file because it's easier to show you how the Clusters and Routes are defined.

Load Balancing
An important use of reverse proxies is to provide generalized load balancing. Again, this allows the reverse proxy to forward requests to more than one server that supplies a specific service. Now you can scale out transparently to the clients of your service(s). Although load balancing is available as a service in many cloud-deployed solutions, in some cases, you'd want more control over it (or you'd use the load balancing support indirectly).

When I say load balancing, I don't mean just sharing load between servers. There are different strategies for load balancing. For example, Figure 3 shows a typical round-robin load balancing where calls are passed off to different servers in a linear fashion.

Figure 3: Round Robin Load Balancing

There are more strategies for load balancing, but this is probably the most common scenario.

To implement load balancing, you need to specify the load balancing type in the cluster:

"CustomerCluster": {
  "Destinations": {
    "customerServer1": { ... },
    "customerServer2": { ... }
  },
  "LoadBalancingPolicy": "RoundRobin"
}

The supported policies are:

• PowerOfTwoChoices (default): Picks two random destinations and picks the one with the least number of requests.
• FirstAlphabetical: Picks the next destination based on name (useful for failover instead of sharing load).
• Random: Picks a random server without regard for load.
• RoundRobin: Picks a server by going in order without regard for load.
• LeastRequests: Picks a server based on the smallest number of requests, but does require that it scan through each destination. This is the slowest but has the highest likelihood of dealing with overloaded servers.

Although load balancing can help you achieve scalability, it doesn't do this by knowing about your servers. If you're completely stateless in those servers, just using the load balancing policy is all you need. But sometimes you have state (e.g., server state or session state) on the servers and need to lock a client to a server once it's been picked. To do this, you can enable SessionAffinity:

"CustomerCluster": {
  "Destinations": {
    "customerServer1": { ... },
    "customerServer2": { ... }
  },
  "LoadBalancingPolicy": "RoundRobin",
  "SessionAffinity": {
    "Enabled": true
  }
}

This tracks affinity with a cookie, although you can change the behavior to use a header instead, as well as adding other parameters. By using these two options of the cluster, you can control the behavior of load balancing in the reverse proxy.
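For example, switching the affinity key from a cookie to a custom header looks roughly like this in configuration — take the exact property names ("Policy", "AffinityKeyName") as my reading of YARP 1.x, and verify them against the current docs:

"SessionAffinity": {
  "Enabled": true,
  "Policy": "CustomHeader",
  "AffinityKeyName": "X-Affinity-Key"
}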
ter, you can control the behavior of load balancing in the
The supported policies are: reverse proxy.

• PowerOfTwoChoices (default): Picks two random des- To enable load balancing or session affinity, you’ll need to
tinations and picks the one with the least number of opt in during the mapping of the proxy server:
requests.
• FirstAlphabetical: Picks the next destination based app.MapReverseProxy(opt => {
on name (useful for failover instead of sharing load).   opt.UseLoadBalancing();
• Random: Picks a random server without regard for   opt.UseSessionAffinity();
load. });
• RoundRobin: Picks a server by going in order without
regard for load. With this, you can add only the features you want to use.
• LeastRequests: Picks a server based on the small-
est number of requests, but does require that it scan SSL Termination
through each destination. This is the slowest but has Most of the websites that you visit use SSL now to ensure
the highest likelihood of dealing with overloaded end-to-end encryption of any data. This is a good thing. A
servers. reverse proxy has this option to do something called SSL
Termination. This is just a fancy name for not using SSL in-
Although load balancing can help you achieve scalability, side a data center. As you can see in Figure 4, the SSL call
it doesn’t do this by knowing about your servers. If you’re terminates with the proxy server.
completely stateless in those servers, just using the load
balancing policy is all you need. But sometimes you have SSL Termination allows you to decide whether you need en-
state (e.g., server state or session state) on the servers and cryption to call the proxied servers. Often, within a data
need to lock a client to a server once it’s been picked. To do center (or cluster), requests are forwarded without SSL so
this, you can enable SessionAffinity: that you can avoid having to manage certificates for each
server cluster. Whether you use SSL is just a matter of what
"CustomerCluster": { the cluster destination URL is:
  "Destinations": {
    "customerServer1": { ... }, "Clusters": {
    "customerServer2": { ... }   "CustomerCluster": {
  },     "Destinations": {
  "LoadBalancingPolicy": "RoundRobin",       "customerServer": {



   
"ClusterId": "CustomerCluster",
   
"Match": {
   
  "Path": "/api/customers/{**catch-all}"
   
},
   
"Transforms": {
   
  "PathPattern":
"/api/v2/customers/{**remainder}"
    }
  }
},

In this case, it replaces the path with a new URL and any-
thing in the catch-all is added as the suffix. In this example,
the transform could be used to redirect to a versioned API.
The types of transforms include:

• Path prefix: Supports removing or adding a prefix to


the request path.
• Path set: Replaces a path with a static path.
• Path pattern: Like the example, allows you to use pat-
tern matching to recreate the endpoint URL.
• Query strings: Add, remove, or convert query strings
Figure 4: SSL Termination to other parts of the request (path, query string, or
header).
• HTTP method: Allows you to change the HTTP method
Source Code         // http for no SSL or https for SSL before it’s sent to the endpoint server.
        "Address": "http://someurl.com/" • Headers: Allows you to have complex changes to
The source code can
      } headers that are added/removed before a request is
be downloaded at https://
github.com/wilder-minds/     } sent to the endpoint server.
yarp-code-magazine   }
} With the transformation support, you can really control how
the requests are formatted when you’re forwarding the re-
Security quest to the endpoint server.
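Transforms can also be added in code instead of configuration. Here's a hedged sketch using the extension methods in the Yarp.ReverseProxy.Transforms namespace (the route ID check and header value are my own example choices, not from the article):

using Yarp.ReverseProxy.Transforms;

bldr.Services.AddReverseProxy()
  .LoadFromConfig(bldr.Configuration.GetSection("Yarp"))
  .AddTransforms(ctx =>
  {
    // Stamp every proxied request with a custom header.
    ctx.AddRequestHeader("X-Proxied-By", "DidItAgain.Proxy");

    // Apply a path transform to one route only.
    if (ctx.Route.RouteId == "CustomerRoute")
    {
      ctx.AddPathPrefix("/api/v2");
    }
  });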
Where Are We?
I hope, at this point, that you've seen the benefit of using a proxy server and, by extension, YARP. This utility server can be plugged into your architectures to solve a series of different problems. I hope you find that YARP is easy to add to a server and easy to configure.

Shawn Wildermuth

Source Code
The source code can be downloaded at https://github.com/wilder-minds/yarp-code-magazine



ONLINE QUICK ID 2209041

Simplifying ADO.NET Code in .NET 6: Part 2
In the last article (Simplifying ADO.NET Code in .NET 6: Part 1), you wrote code to simplify ADO.NET and map columns to properties in a class just like ORMs such as the Entity Framework do. You learned to use reflection to make creating a collection of entity objects from a data reader easier and to take advantage of attributes such as [Column] and [NotMapped]. In this article, you're going to refactor the code further to make it even more generic. In addition, you'll learn to get data from a view, get a scalar value, handle multiple result sets, and call stored procedures.

Paul D. Sheriff
http://www.pdsa.com

Paul has been in the IT industry over 35 years. In that time, he has successfully assisted hundreds of companies architect software applications to solve their toughest business problems. Paul has been a teacher and mentor through various mediums such as video courses, blogs, articles, and speaking engagements at user groups and conferences around the world. Paul has many courses in the www.pluralsight.com library (http://www.pluralsight.com/author/paul-sheriff) on topics ranging from .NET 6, LINQ, JavaScript, Angular, MVC, WPF, ADO.NET, jQuery, and Bootstrap. Contact Paul at psheriff@pdsa.com.

Refactor the Code for Reusability
In the last article (CODE Magazine, July/August 2022), you added methods to the ProductRepository class to read product data from the SalesLT.Product table in the AdventureWorksLT database. If you look at this code, all of it is completely generic and can be used for any table. As such, this code should be moved to a base class from which you can inherit. You can then have a ProductRepository, CustomerRepository, EmployeeRepository, and other classes that can all inherit from the base class yet add functionality that's specific for each table.

Create a Repository Base Class
Right mouse-click on the Common folder, create a new class named RepositoryBase, and add the code shown in Listing 1. Notice that the properties are the same as what you previously added to the ProductRepository class. The constructor for this class must be passed the generic DatabaseContext class. After setting the DbContext property, the Init() method is called to initialize all the properties to a valid start state.

Add Search() Method Just for Products
Add a Search() method to the RepositoryBase class just below the Init() method. This method is different from the Search() method previously written in the ProductRepository class because it removes the using around the SqlServerDatabaseContext.

public virtual List<TEntity>
  Search<TEntity>() {
  List<TEntity> ret;

  // Build SQL from Entity class
  SQL = BuildSelectSql<TEntity>();
  // Create Command Object with SQL
  DbContext.CreateCommand(SQL);
  // Get the list of entity objects
  ret = BuildEntityList<TEntity>
    (DbContext.CreateDataReader());

  return ret;
}

You now need to move the BuildEntityList(), BuildColumnCollection(), and BuildSelectSql() methods from the ProductRepository class into this new RepositoryBase class.
Simplify the Product Repository Class
Now that you have a RepositoryBase class with all of the methods moved from the ProductRepository class, you can greatly simplify the ProductRepository class by having it inherit from the RepositoryBase class. Modify the ProductRepository.cs file to look like Listing 2.

In the ProductRepository class, you must accept a database context object in the constructor because without one, there's no way you could interact with the Product table. A specific Search() method is created to return a list of Product objects in the ProductRepository class, but it simply uses the generic Search<TEntity>() method from the RepositoryBase class.

Add Database Context Class for the AdventureWorksLT Database
Instead of using the generic DatabaseContext or SqlServerDatabaseContext classes directly, it's a better practice to create a database context class for each database you wish to interact with. Right mouse-click on the project and add a new folder named Models. Right mouse-click on the Models folder and add a new class named AdvWorksDbContext that inherits from the SqlServerDatabaseContext class, as shown in Listing 3.

The AdvWorksDbContext class inherits from SqlServerDatabaseContext because the AdventureWorksLT database you're interacting with is in a SQL Server. An instance of the ProductRepository class is created in the Init() method and exposed as a public property named Products. The AdvWorksDbContext is passed to the constructor of the ProductRepository class because it needs the services of a database context to perform its functions against the Product table.

Try It Out
Now that you've made these changes, let's ensure that you can still retrieve all records from the Product table. Open the Program.cs file and add a new using statement at the top of the file.

using AdoNetWrapperSamples.Models;

Remember that you removed the using from the Search() method in the RepositoryBase class? You're now going to create the using wrapper around the AdvWorksDbContext class to have all objects disposed of properly once you've retrieved all records.

Remove all the lines of code from where you create the ProductRepository class and the call to the Search() method. Add in the code shown in the snippet below. You can now see the using statement that wraps up the instance
of the AdvWorksDbContext class. This code should look fa- Listing 1: Add a base class for all the code that does not change between all repository classes
miliar if you have used the Entity Framework (EF), as this
#nullable disable
is typically how you interact with DbContext classes you
create with EF. using System.ComponentModel.DataAnnotations
.Schema;
using AdvWorksDbContext db = new(ConnectString); using System.Data;
using System.Reflection;
using System.Text;
List<Product> list = db.Products.Search();
namespace AdoNetWrapper.Common;
Console.WriteLine("*** Get Product Data ***");
public class RepositoryBase {
// Display Data public RepositoryBase(
foreach (var item in list) { DatabaseContext context) {
Console.WriteLine(item.ToString()); DbContext = context;
} Init();
}
Console.WriteLine();
Console.WriteLine( protected readonly DatabaseContext DbContext;
$"Total Items: {list.Count}");
Console.WriteLine(); public string SchemaName { get; set; }
public string TableName { get; set; }
Console.WriteLine( public string SQL { get; set; }
$"SQL Submitted: {db.Products.SQL}"); public List<ColumnMapper> Columns { get; set; }
Console.WriteLine();
protected virtual void Init() {
SchemaName = "dbo";
Run the console application and you should see the com- TableName = string.Empty;
plete list of product objects displayed. In addition, you SQL = string.Empty;
should see the SQL statement submitted by the classes you Columns = new();
}
created in this article. }

Searching for Data


In addition to retrieving all records, you probably want to Listing 2: Modify the ProductRepository class to pass a Product object to the Search() method
add a WHERE clause to filter the records based on some #nullable disable
condition. For example, you might wish to locate all Prod-
using AdoNetWrapper.Common;
uct records where the Name column starts with a specific using AdoNetWrapperSamples.Models;
character and the ListPrice column contains a value greater using AdoNetWrapperSamples.EntityClasses;
than a specific value. You want to have the wrapper classes
generate a SQL statement that looks like the following. namespace AdoNetWrapperSamples
.RepositoryClasses;
SELECT * FROM SalesLT.Product public class ProductRepository
WHERE Name LIKE @Name + '%' : RepositoryBase {
AND ListPrice >= @ListPrice public ProductRepository(
AdvWorksDbContext context)
: base(context) { }
You need to add some new functionality to create this SQL
statement. You need to pass in values to fill into the @ public virtual List<Product> Search() {
Name and @ListPrice parameters. You also need to specify return base.Search<Product>();
}
what the operators (=, LIKE, or >=) are for each expression. }
For example, you need to put a LIKE operator for the @
Name parameter and a greater-than or equal-to (>=) opera-
tor for the @ListPrice parameter.
Listing 3: Create a DbContext class for each database you wish to interact with
Add a Product Search Class #nullable disable
To pass in the values to the Search() method, create a
using AdoNetWrapper.Common;
class to hold the parameters you wish to use for the WHERE using AdoNetWrapperSamples
clause. Right mouse-click on the project and add a new fold- .RepositoryClasses;
er named SearchClasses. Right mouse-click on the Search-
Classes folder and add a new class named ProductSearch namespace AdoNetWrapperSamples.Models;
that looks like the code below. public partial class AdvWorksDbContext
: SqlServerDatabaseContext {
#nullable disable public AdvWorksDbContext(string connectString)
: base(connectString) { }
using AdoNetWrapper.Common; protected override void Init() {
base.Init();
namespace AdoNetWrapperSamples.SearchClasses;
Products = new(this);
}
public class ProductSearch {
[Search("LIKE")] public ProductRepository Products { get; set; }
public string Name { get; set; } }

  [Search(">=")]
  public decimal? ListPrice { get; set; }
}

Create the Name and ListPrice properties to use for searching. All properties in this class should be nullable unless you wish to require the user to enter at least one search value prior to searching for records. All properties should be decorated with the [Search] attribute unless you just wish to use an equal (=) operator in the WHERE clause.

Add a Search Attribute Class
Microsoft doesn't have a [Search] attribute, so it's up to you to create one. Right mouse-click on the Common folder and add a new class named SearchAttribute, as shown in the following code snippet.

#nullable disable

namespace AdoNetWrapper.Common;

[AttributeUsage(AttributeTargets.Property)]
public class SearchAttribute : Attribute {
  public string SearchOperator { get; set; }
  public string ColumnName { get; set; }

  public SearchAttribute(string searchOperator) {
    SearchOperator = searchOperator ?? "=";
  }
}

There are two properties needed for this attribute class: SearchOperator and ColumnName. The SearchOperator property is assigned an equal sign (=) if one isn't supplied. If the ColumnName property is null, the code you're going to use to create the WHERE clause will use the property name of the search class.

Modify the ColumnMapper Class
When building the collection of columns needed for the WHERE clause, the process is going to be like the code used to build the columns for the SELECT statement. However, you're going to need two additional items to keep track of: the value to supply as a parameter and the search operator to use. Open the ColumnMapper.cs file in the Common folder and add a ParameterValue property and a SearchOperator property.

public class ColumnMapper {
  public string ColumnName { get; set; }
  public PropertyInfo PropertyInfo { get; set; }
  public object ParameterValue { get; set; }
  public string SearchOperator { get; set; }
}
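Because ColumnName is a settable property on the SearchAttribute class, you can also map a search property to a table column with a different name. The search class below is a hypothetical example on my part (it isn't part of the sample download) showing both pieces of the attribute in use:

public class ProductPriceSearch {
  // The property name doesn't match the table
  // column, so map it explicitly with ColumnName
  [Search(">=", ColumnName = "ListPrice")]
  public decimal? MinimumPrice { get; set; }
}

With this in place, the wrapper classes emit WHERE ListPrice >= @ListPrice instead of using the MinimumPrice property name.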
Add Method to Build Search Column Collection
Open the RepositoryBase.cs file and add a new method named BuildSearchColumnCollection(), as shown in Listing 4. This method is just like the BuildColumnCollection() method you wrote in the last article. Create an array of PropertyInfo objects for each property in the TSearch class. Loop through the array of properties and retrieve the value for the current property of the search class. If the value is filled in, create a new ColumnMapper object. Check for a [Search] attribute and if found, see if the ColumnName and/or the SearchOperator property exists. Override those properties in the ColumnMapper object if they do exist. Add the new ColumnMapper object into the ret variable to be returned once all properties in the search class are processed.

Listing 4: Create method to build collection of properties for the search columns

protected virtual List<ColumnMapper>
  BuildSearchColumnCollection<TEntity,
    TSearch>(TSearch search) {
  List<ColumnMapper> ret = new();
  ColumnMapper colMap;
  object value;

  // Get all the properties in <TSearch>
  PropertyInfo[] props =
    typeof(TSearch).GetProperties();

  // Loop through all properties
  foreach (PropertyInfo prop in props) {
    value = prop.GetValue(search, null);

    // Is the search property filled in?
    if (value != null) {
      // Create a column mapping object
      colMap = new() {
        ColumnName = prop.Name,
        PropertyInfo = prop,
        SearchOperator = "=",
        ParameterValue = value
      };

      // Does Property have a [Search] attribute
      SearchAttribute sa = prop
        .GetCustomAttribute<SearchAttribute>();
      if (sa != null) {
        // Set column name from [Search]
        colMap.ColumnName =
          string.IsNullOrWhiteSpace(sa.ColumnName)
            ? colMap.ColumnName : sa.ColumnName;
        colMap.SearchOperator =
          sa.SearchOperator ?? "=";
      }

      // Add to the collection of columns
      ret.Add(colMap);
    }
  }

  return ret;
}

Add Method to Create WHERE Clause for Searching
The next new method is used to build the actual WHERE clause to be added to the SELECT statement. Add a new method named BuildSearchWhereClause(), as shown in Listing 5. Pass to this method the list of ColumnMapper objects created using the BuildSearchColumnCollection() method. Iterate over the list of objects and build the WHERE clause. Be careful when copying the code from the article, as I had to break lines in the sb.Append() due to formatting of the article. The interpolated string belongs all on one line, with a space between each item except between the ParameterPrefix and the ColumnName properties.

Listing 5: Add a method to build a WHERE clause for searching

protected virtual string BuildSearchWhereClause
  (List<ColumnMapper> columns) {
  StringBuilder sb = new(1024);
  string and = string.Empty;

  // Create WHERE clause
  sb.Append(" WHERE");
  foreach (var item in columns) {
    sb.Append($"{and} {item.ColumnName} {item.SearchOperator} {DbContext.ParameterPrefix}{item.ColumnName}");
    and = " AND";
  }

  return sb.ToString();
}

Add Method to Create Parameters for Command Object
The last new method to build is called BuildWhereClauseParameters(), as shown in Listing 6. In this method, you iterate over the same collection of ColumnMapper objects you created in the BuildSearchColumnCollection() method.
Each time through, build a new SqlParameter object, passing in the column name and either the value to submit by itself or, if the SearchOperator property is equal to "LIKE", the value with a percent sign (%) added on.

Listing 6: Add a method to build the parameters for the WHERE clause

protected virtual void BuildWhereClauseParameters
  (IDbCommand cmd,
    List<ColumnMapper> whereColumns) {
  // Add parameters for each value passed in
  foreach (ColumnMapper item in whereColumns) {
    var param = DbContext.CreateParameter(
      item.ColumnName,
      item.SearchOperator == "LIKE" ?
        item.ParameterValue + "%" :
        item.ParameterValue);
    cmd.Parameters.Add(param);

    // Store parameter info
    Columns.Find(c => c.ColumnName ==
      item.ColumnName)
      .ParameterValue = item.ParameterValue;
  }
}

Overload Search() Method to Accept a Command Object
Add a new overload for the Search() method to accept a Command object (Listing 7). This Search() method checks to ensure that the Columns collection has been built from the TEntity class. It then sets the DbContext.CommandObject property to the cmd object variable passed in. The BuildEntityList() method is then called to create the list of entity objects.

Listing 7: Add a Search() method that accepts a Command object

public virtual List<TEntity>
  Search<TEntity>(IDbCommand cmd) {
  List<TEntity> ret;

  // Build Columns if needed
  if (Columns.Count == 0) {
    Columns = BuildColumnCollection<TEntity>();
  }

  // Set Command Object
  DbContext.CommandObject = cmd;

  // Get the list of entity objects
  ret = BuildEntityList<TEntity>
    (DbContext.CreateDataReader());

  return ret;
}

Modify the original Search() method to call the new overload you just created, as shown in the following code snippet.

public virtual List<TEntity> Search<TEntity>() {
  // Build SQL from Entity class
  SQL = BuildSelectSql<TEntity>();

  // Create Command Object with SQL
  DbContext.CreateCommand(SQL);

  return Search<TEntity>(DbContext.CommandObject);
}

Overload Search() Method to Accept Search Class
Open the RepositoryBase.cs file and add another overloaded Search() method that takes two type parameters, TEntity and TSearch, as shown in Listing 8. After building the SELECT statement, call the BuildSearchColumnCollection() method that uses the TSearch class to build a collection of columns to be used in the WHERE clause. If there are any search columns, call the BuildSearchWhereClause() to build the actual WHERE clause to add to the SELECT statement. The SqlCommand object is built using the new SELECT clause, and then parameters are added with the values from the TSearch object. The SqlCommand object is then passed to the Search() method that accepts the command object.

Listing 8: Create an overloaded Search() method to accept a Product Search class

public virtual List<TEntity>
  Search<TEntity, TSearch>(TSearch search) {
  // Build SQL from Entity class
  SQL = BuildSelectSql<TEntity>();

  // Build collection of ColumnMapper objects
  // from properties in the TSearch object
  var searchColumns =
    BuildSearchColumnCollection<TEntity,
      TSearch>(search);

  if (searchColumns != null &&
    searchColumns.Any()) {
    // Build the WHERE clause for Searching
    SQL += BuildSearchWhereClause(searchColumns);
  }

  // Create Command Object with SQL
  DbContext.CreateCommand(SQL);

  // Add any Parameters?
  if (searchColumns != null &&
    searchColumns.Any()) {
    BuildWhereClauseParameters(
      DbContext.CommandObject, searchColumns);
  }

  return Search<TEntity>(DbContext.CommandObject);
}

Modify Product Repository Class
Now that you have the generic version of the Search() method to accept a search entity object, you need to add a Search() method to the ProductRepository class to accept a ProductSearch class. Open the ProductRepository.cs file and add a new using statement at the top of the file.

using AdoNetWrapperSamples.SearchClasses;

Add a new Search() method to the ProductRepository class to call the Search<TEntity, TSearch>() method in the RepositoryBase class.

public virtual List<Product>
  Search(ProductSearch search) {
  return base
    .Search<Product, ProductSearch>(search);
}

Try It Out
Open the Program.cs file and add a new using statement at the top of the file so you can use the ProductSearch class.

using AdoNetWrapperSamples.SearchClasses;

Add an instance of the ProductSearch class and initialize the Name property to the value "C", and the ListPrice property to be 50. Call the overloaded Search() method you just added to the ProductRepository class and pass in the instance of the ProductSearch class as shown in the following code.

using AdvWorksDbContext db = new(ConnectString);

ProductSearch search = new() {
  Name = "C",
  ListPrice = 50
};

List<Product> list =
  db.Products.Search(search);

// REST OF THE CODE HERE

Run the console application and you should see three products displayed, as shown in Figure 1.

Figure 1: Build a WHERE clause to limit the total records returned

Create Generic Method to Submit SQL
Sometimes you may need a way to submit any SQL statement to the database and have it return any list of objects you want. Maybe you want to submit some SQL that has a few tables joined together. Into which repository class would you want to put that? Instead of worrying about where it belongs, you can create a Database property on the AdvWorksDbContext class that's of the type RepositoryBase and just submit the SQL using a SqlCommand object. Open the AdvWorksDbContext.cs file and add a new property of the type RepositoryBase.

public RepositoryBase Database { get; set; }

Modify the Init() method of the AdvWorksDbContext class to pass in the current instance of AdvWorksDbContext to the RepositoryBase class instance called Database.

public virtual void Init() {
  Database = new(this);
  Products = new(this);
}
Building Your Own Command Object
Open the Program.cs file and create a SQL string with the same WHERE clause you created earlier (Listing 9). Create a SqlCommand object by calling the CreateCommand() method and pass in the sql variable. Add the parameters to the command object and pass in some hard-coded values. Call the Search<Product>(cmd) method directly to retrieve the list of rows in the Product table that match the search criteria.

Try It Out
Run the console application and you should see three products displayed, as shown in Figure 2.
Figure 2: Add a WHERE clause to your SQL by using a search class and the [Search] attribute.

Listing 9: Create a SQL statement and a Command object to submit a search

using AdvWorksDbContext db = new(ConnectString);

string sql = "SELECT * FROM SalesLT.Product ";
sql += "WHERE Name LIKE @Name + '%'";
sql += " AND ListPrice >= @ListPrice";

// Create Command object
var cmd = db.CreateCommand(sql);
// Add Parameters
cmd.Parameters.Add(
  db.CreateParameter("Name", "C"));
cmd.Parameters.Add(
  db.CreateParameter("ListPrice", 50));

// Call the SELECT statement
List<Product> list =
  db.Database.Search<Product>(cmd);

Console.WriteLine("*** Get Product Data ***");
// Display Data
foreach (var item in list) {
  Console.WriteLine(item.ToString());
}
Console.WriteLine();
Console.WriteLine($"Total Items: {list.Count}");
Console.WriteLine();

Retrieve Data from a View
Now let's retrieve the data from a view in the AdventureWorksLT database named vProductAndDescription. If this view isn't already in the AdventureWorksLT database, create it using the following SQL:

CREATE VIEW vProductAndDescription AS
SELECT p.ProductID, p.Name,
  pm.Name AS ProductModel,
  pmd.Culture, pd.Description
FROM SalesLT.Product AS p
INNER JOIN SalesLT.ProductModel AS pm
  ON p.ProductModelID = pm.ProductModelID
INNER JOIN
  SalesLT.ProductModelProductDescription AS pmd
  ON pm.ProductModelID = pmd.ProductModelID
INNER JOIN
  SalesLT.ProductDescription AS pd
  ON pmd.ProductDescriptionID =
    pd.ProductDescriptionID;

Add a new class named ProductAndDescription to map to the vProductAndDescription view. Right mouse-click on the EntityClasses folder and add a new class named ProductAndDescription, as shown in Listing 10.

Listing 10: Add an Entity class to map the results returned from the view

#nullable disable

using System.ComponentModel
  .DataAnnotations.Schema;

namespace AdoNetWrapperSamples.EntityClasses;

[Table("vProductAndDescription",
  Schema = "SalesLT")]
public partial class ProductAndDescription {
  public int ProductID { get; set; }
  public string Name { get; set; }
  public string ProductModel { get; set; }
  public string Culture { get; set; }
  public string Description { get; set; }

  public override string ToString() {
    return $"Name={Name} - ProductModel={ProductModel} - Description={Description}";
  }
}

Try It Out
Open the Program.cs file and modify the code to call the view using the Search() method on the Database property.

using AdvWorksDbContext db = new(ConnectString);

// Get all rows from view
List<ProductAndDescription> list =
  db.Database.Search<ProductAndDescription>();

Console.WriteLine("*** Get Product Data ***");
// Display Data
foreach (var item in list) {
  Console.WriteLine(item.ToString());
}
Console.WriteLine();
Console.WriteLine($"Total Items: {list.Count}");
Console.WriteLine();
Console.WriteLine($"SQL Submitted: {db.Database.SQL}");

Run the console application and you should see over 1700 rows appear from the view. Many of these have a bunch of question marks. This is because the data in the table has some foreign language characters.
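The question marks are a display issue rather than a data problem: the Description column contains Unicode text that the console's default code page can't render. If you want to see the actual characters, one common fix (an aside on my part, not a step in the sample code) is to switch the console output to UTF-8 before writing any data:

using System.Text;

// Lets the console render Unicode characters
Console.OutputEncoding = Encoding.UTF8;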

Search Using a View
Just like you created a search class for the Product table, you can also create a search class for searching when using a view. Right mouse-click on the SearchClasses folder and add a new class named ProductAndDescriptionSearch, as shown in the code snippet below.

#nullable disable

using AdoNetWrapper.Common;

namespace AdoNetWrapperSamples.SearchClasses;

public class ProductAndDescriptionSearch {
  [Search("=")]
  public string Culture { get; set; }
}

Try It Out
Modify the code in Program.cs to create an instance of this new search class. Set the Culture property to the value "en" so you only grab those records where the Culture field matches this value. Call the overload of the Search() method to which you pass a search class.

ProductAndDescriptionSearch search = new() {
  Culture = "en",
};

// Perform a search for specific culture
List<ProductAndDescription> list =
  db.Database.Search<ProductAndDescription,
    ProductAndDescriptionSearch>(search);

Run the console application and you should see almost 300 rows of data returned from the view.
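Nothing limits a view's search class to a single property. As a variation (illustrative only, not in the sample download), you could extend ProductAndDescriptionSearch to also filter the view by name using the same operators you used for the Product table. Any property left null simply stays out of the generated WHERE clause.

public class ProductAndDescriptionSearch {
  [Search("=")]
  public string Culture { get; set; }

  // Another searchable column;
  // leave it null to skip this filter
  [Search("LIKE")]
  public string Name { get; set; }
}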
Find a Single Product
Now that you've learned how to create a WHERE clause, you can use this same kind of code to locate a record by its primary key. The ProductID column in the SalesLT.Product table is the primary key, so you want to create a SELECT statement that looks like the following:

SELECT * FROM SalesLT.Product
WHERE ProductID = @ProductID

Use the [Key] Attribute
To do this, you must identify the property in the Product class that holds the primary key. You're going to do this using the [Key] attribute class that .NET provides. Open the Product.cs file and add a using statement.

using System.ComponentModel.DataAnnotations;

Add the [Key] attribute above the Id property.
[Key]
[Column("ProductID")]
public int Id { get; set; }

Open the ColumnMapper.cs file and add a new property called IsKeyField so that, as you're looping through and building the list of properties, you can set this Boolean property to true for the property decorated with the [Key] attribute.

public bool IsKeyField { get; set; }

Open the RepositoryBase.cs file and add a using statement at the top of the file.

using System.ComponentModel.DataAnnotations;

Locate the BuildColumnCollection() method and, just below the code where you check for a ColumnAttribute and set the colMap.ColumnName, add the following code to check for the [Key] attribute:

// Is the column a primary [Key]?
KeyAttribute key = prop
  .GetCustomAttribute<KeyAttribute>();
colMap.IsKeyField = key != null;

Add a Find() Method
Add a new method named Find() to the RepositoryBase class, as shown in Listing 11. This method has the same signature as the LINQ Find() method, where you pass in one or more values to a parameter array. Most tables only have a single field as their primary key, but in case a table has a composite key, you need to have a parameter array for those additional values.

Listing 11: The Find() method retrieves a single entity from the table

public virtual TEntity Find<TEntity>
  (params Object[] keyValues)
  where TEntity : class {
  // To assign null, use 'where TEntity : class'
  TEntity ret = null;

  if (keyValues != null) {
    List<ColumnMapper> searchColumns;

    // Build SQL from Entity class
    SQL = BuildSelectSql<TEntity>();

    // Build a collection of ColumnMapper
    // objects based on [Key] attribute
    searchColumns = Columns
      .Where(col => col.IsKeyField).ToList();

    // Number of [Key] attributes on entity class
    // must match number of key values passed in
    if (searchColumns.Count != keyValues.Length) {
      throw new ApplicationException(
        "Not enough parameters passed to Find() method, or not enough [Key] attributes on the entity class.");
    }

    // Set the values into the searchColumns
    for (int i = 0; i < searchColumns.Count;
      i++) {
      searchColumns[i].ParameterValue =
        keyValues[i];
      searchColumns[i].SearchOperator = "=";
    }

    // Build the WHERE clause for Searching
    SQL += BuildSearchWhereClause(searchColumns);

    // Create command object with SQL
    DbContext.CreateCommand(SQL);

    // Add any Parameters?
    if (searchColumns != null &&
      searchColumns.Any()) {
      BuildWhereClauseParameters(
        DbContext.CommandObject, searchColumns);
    }

    // Get the entity
    ret = Find<TEntity>(DbContext.CommandObject);
  }

  return ret;
}

The BuildSelectSql() method creates the SELECT statement and the Columns property. Next, the searchColumns variable is created as a list of ColumnMapper objects with just those columns where the IsKeyField property is set to true. Ensure that the number of values passed into the parameter array is equal to the number of properties with the [Key] attribute. If these two numbers don't match, throw an ApplicationException object.

Loop through the collection of searchColumns and fill in the ParameterValue property for each ColumnMapper object in the list. Set the SearchOperator property for each to be an equal sign because you're looking for an exact match.

Build the WHERE clause for the SELECT statement by using the BuildSearchWhereClause() method you created earlier. Build the SqlCommand object and then build the parameters for the WHERE clause by calling the BuildWhereClauseParameters() method.

Call the overload of the Find() method shown in Listing 12. This method is responsible for passing the command object to the Search() method and retrieving the results back. Check the results to ensure values were found, and if there's at least one product in the list, assign the first item to the ret variable to be returned from this method. If no values are found, a null value is returned just like the LINQ Find() method.

Listing 12: The overload of the Find() method executes the command

public virtual TEntity
  Find<TEntity>(IDbCommand cmd)
  where TEntity : class {
  // To assign null, use 'where TEntity : class'
  TEntity ret = null;

  // Build Columns if needed
  if (Columns.Count == 0) {
    Columns = BuildColumnCollection<TEntity>();
  }

  // Get the entity
  var list = Search<TEntity>(cmd);

  // Check for a single record
  if (list != null && list.Any()) {
    // Assign the object to the return value
    ret = list[0];
  }

  return ret;
}
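Because Find() takes a parameter array, composite keys work too. The entity below is hypothetical (it isn't in the AdventureWorksLT database or the sample download); it simply shows two [Key] properties and the matching two-value call:

[Table("OrderLine", Schema = "SalesLT")]
public partial class OrderLine {
  [Key]
  public int OrderID { get; set; }
  [Key]
  public int LineNumber { get; set; }
  public decimal LineTotal { get; set; }
}

// Two [Key] attributes, so two key values:
// generates WHERE OrderID = @OrderID
//   AND LineNumber = @LineNumber
OrderLine line = db.Database.Find<OrderLine>(42, 3);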
Now that you have the generic Find() methods written in the RepositoryBase class, open the ProductRepository.cs file and add the Find() method that accepts an integer value that relates to the ProductID field in the Product table.

public virtual Product Find(int id) {
  return base.Find<Product>(id);
}

Try It Out
Open the Program.cs file and change the code to call the Find() method, as shown in Listing 13. This method should check to ensure that a single entity class is returned. If the value returned is null, write a message into the console window; otherwise, write the product entity into the console window. Run the console application and you should see a single product object displayed. You may need to change the product ID to match an ID from your SalesLT.Product table.

Listing 13: The Find() method returns a null if the record is not found, or it returns a valid entity object

using AdvWorksDbContext db = new(ConnectString);

Product entity = db.Products.Find(706);

Console.WriteLine("*** Get Product Data ***");
if (entity == null) {
  Console.WriteLine(
    "Can't Find Product ID=706");
}
else {
  // Display Data
  Console.WriteLine(entity.ToString());
  Console.WriteLine();
  Console.WriteLine(
    $"SQL Submitted: {db.Products.SQL}");
}
Console.WriteLine();

Get a Scalar Value
If you need to retrieve the value from one of the many aggregate functions in SQL Server, such as Count(), Sum(), Avg(), etc., expose a method named ExecuteScalar() from the RepositoryBase class. To retrieve the count of all records in the Product table, submit a SQL statement such as the following:

SELECT COUNT(*) FROM SalesLT.Product;

Place this SQL statement into a Command object and call the ExecuteScalar() method on the Command object. Open the RepositoryBase.cs file and add a new method. Because you don't know what type of object you're going to get back, return an object data type.

public virtual object
  ExecuteScalar(IDbCommand cmd) {
  object ret;

  // Open the Connection
  DbContext.CommandObject.Connection.Open();

  // Call the ExecuteScalar() method
  ret = DbContext.CommandObject.ExecuteScalar();

  return ret;
}
Add an overload of the ExecuteScalar() method to allow you to pass in a simple SQL statement. This method then creates the Command object and passes it to the previous ExecuteScalar() overload for processing.

public virtual object
  ExecuteScalar(string sql) {
  // Store the SQL submitted
  SQL = sql;

  // Create Command object with SQL
  DbContext.CreateCommand(SQL);

  // Return the value
  return ExecuteScalar(DbContext.CommandObject);
}

Try It Out
Open the Program.cs file and add code to test this out.

using AdvWorksDbContext db = new(ConnectString);

string sql = "SELECT COUNT(*) FROM SalesLT.Product";
int rows = (int)db.Database.ExecuteScalar(sql);

Console.WriteLine(
  "*** ExecuteScalar(sql) Sample ***");
// Display Result
Console.WriteLine(rows);
Console.WriteLine();
Console.WriteLine(
  $"SQL Submitted: {db.Database.SQL}");
Console.WriteLine();

Run this application and you should see the total number of products within the Product table appear in the console window.
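The same method works for any query that returns a single value. Here are a couple of quick variations; note that the casts are assumptions on my part, because ExecuteScalar() returns object and the runtime type depends on the column type SQL Server hands back (COUNT(*) arrives as an int, and an average of the money-based ListPrice column arrives as a decimal):

int count = (int)db.Database.ExecuteScalar(
  "SELECT COUNT(*) FROM SalesLT.Product");

decimal avgPrice = (decimal)db.Database
  .ExecuteScalar(
    "SELECT AVG(ListPrice) FROM SalesLT.Product");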
Multiple Result Sets
Sometimes, retrieving multiple result sets can help you cut down the number of roundtrips to your SQL Server. A data reader object supports reading one result set and then advancing to the next. Let's look at how this works with the wrapper classes you've created so far.

Create New Search() Method Overload
Open up the RepositoryBase.cs file and create a new overload of the Search() method, as shown in Listing 14. This method accepts both a command object and a data reader, and it's responsible for calling the BuildEntityList() method.

Listing 14: Add a new Search() method that takes an IDataReader object

public virtual List<TEntity>
  Search<TEntity>(IDbCommand cmd,
    IDataReader rdr) {
  List<TEntity> ret;

  // Build Columns if needed
  if (Columns.Count == 0) {
    Columns = BuildColumnCollection<TEntity>();
  }

  // Set Command Object
  DbContext.CommandObject = cmd;

  // Get the list of entity objects
  ret = BuildEntityList<TEntity>(rdr);

  return ret;
}

Modify the old Search() method to have it now call this new overload, as shown in the code snippet below. Remove the declaration of the ret variable, and modify the return statement to call the new overloaded Search() method.

public virtual List<TEntity> Search<TEntity>
  (IDbCommand cmd) {
  // Build Columns if needed
  if (Columns.Count == 0) {
    Columns = BuildColumnCollection<TEntity>();
  }

  // Set Command Object
  DbContext.CommandObject = cmd;

  return Search<TEntity>(cmd,
    DbContext.CreateDataReader());
}

Add a Customer Entity Class
To illustrate multiple result sets, you need a new entity class. In the AdventureWorksLT database, there's a Customer table. Let's create a new Customer.cs file and add the code shown in Listing 15 to model that table.

Listing 15: Add a new entity class to illustrate how to get multiple result sets

#nullable disable

using System.ComponentModel.DataAnnotations;
using System.ComponentModel
  .DataAnnotations.Schema;

namespace AdoNetWrapperSamples.EntityClasses;

[Table("Customer", Schema = "SalesLT")]
public partial class Customer {
  [Key]
  public int CustomerID { get; set; }
  public string Title { get; set; }
  public string FirstName { get; set; }
  public string MiddleName { get; set; }
  public string LastName { get; set; }
  public string CompanyName { get; set; }

  public override string ToString() {
    return $"{LastName}, {FirstName} ({CustomerID})";
  }
}

Add a View Model Class
Instead of writing the code to handle multiple result sets in the Program.cs file, create a new view model class to encapsulate the functionality of reading both product and customer data. Right mouse-click on the project and create a folder named ViewModelClasses. Right mouse-click on the ViewModelClasses folder, add a new class named ProductCustomerViewModel.cs, and add the code shown in Listing 16.
Listing 16: Create a class to wrap up both result sets

#nullable disable

using AdoNetWrapperSamples.EntityClasses;
using AdoNetWrapperSamples.Models;

namespace AdoNetWrapperSamples.ViewModelClasses;

public class ProductCustomerViewModel {
  public ProductCustomerViewModel
    (string connectString) {
    ConnectString = connectString;
  }

  public string ConnectString { get; set; }
  public List<Product> Products { get; set; }
  public List<Customer> Customers { get; set; }

  public void LoadProductsAndCustomers() {
    string sql = "SELECT * FROM SalesLT.Product;";
    sql += "SELECT * FROM SalesLT.Customer";

    using AdvWorksDbContext db =
      new(ConnectString);

    // Create Command object
    var cmd = db.CreateCommand(sql);

    // Get the Product Data
    Products = db.Database.Search<Product>(cmd);

    // Advance to next result set
    db.DataReaderObject.NextResult();

    // Clear columns to get ready
    // for next result set
    db.Database.Columns = new();

    // Get the Customer Data
    Customers = db.Database
      .Search<Customer>(cmd, db.DataReaderObject);
  }
}

The code in the LoadProductsAndCustomers() method creates a string with two SQL statements in it. An instance of the AdvWorksDbContext class is created with a using block so all connection objects are disposed of properly. Next, a SqlCommand object is created by calling the CreateCommand() method on the database context object.

The Search<Product>() method is called to load the set of product data. Call the NextResult() method on the data reader object to move to the next result set. Clear the current list of ColumnMapper objects because that list of columns is for the Product data set. Finally, call the Search<Customer>() method, passing in the command object and the current data reader object, which is now ready to loop through the customer records.

Try It Out
To try this code out to make sure it works, open the Program.cs file. Put the code shown below just after the code that retrieves the connection string.

ProductCustomerViewModel vm = new(ConnectString);

vm.LoadProductsAndCustomers();

// Display Products
foreach (var item in vm.Products) {
  Console.WriteLine(item);
}

// Display Customers
foreach (var item in vm.Customers) {
  Console.WriteLine(item);
}

Run the application and you should see the list of products and customers appear in the console window.
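The same pattern extends to three or more result sets: append another SELECT to the batch, advance the reader again, and reset the Columns collection before mapping the next entity type. The extra steps inside LoadProductsAndCustomers() might look like the sketch below; the Address entity and Addresses property are assumptions for illustration, not part of the sample download.

// Assumes the sql string now ends with:
//   ";SELECT * FROM SalesLT.Address"

// Advance past the Customer result set
db.DataReaderObject.NextResult();

// Reset column mappings for the next entity type
db.Database.Columns = new();

// Read the third result set
Addresses = db.Database
  .Search<Address>(cmd, db.DataReaderObject);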
Search for Data Using a Stored Procedure
Another common method of retrieving data from a database is to call a stored procedure. If you have a three (or more) table join, it's a best practice to move that code to a stored procedure or a view in your database. Keeping complicated queries out of your C# code is better for readability and maintenance. It also allows you to tune the join in the server. Let's look at calling a stored procedure using the ADO.NET wrapper classes. Create a stored procedure in the AdventureWorksLT database named Product_Search, as shown in Listing 17.

Listing 17: Create a stored procedure to perform searching

CREATE PROCEDURE [SalesLT].[Product_Search]
  @Name nvarchar(50) null,
  @ProductNumber nvarchar(25) null,
  @BeginningCost money null,
  @EndingCost money null
AS
BEGIN
  SELECT *
  FROM SalesLT.Product
  WHERE (@Name IS NULL OR
      Name LIKE @Name + '%')
    AND (@ProductNumber IS NULL OR
      ProductNumber LIKE @ProductNumber + '%')
    AND (@BeginningCost IS NULL OR
      StandardCost >= @BeginningCost)
    AND (@EndingCost IS NULL OR
      StandardCost <= @EndingCost)
END

Create Parameter Class for Calling a Stored Procedure
Because the Product_Search stored procedure has four parameters, you should create a class with four properties. Right mouse-click on the project and add a new folder named ParameterClasses. Right mouse-click on the ParameterClasses folder and add a new class named ProductSearchParam. The property names should match the parameter names within the stored procedure.

#nullable disable

using AdoNetWrapper.Common;

namespace AdoNetWrapperSamples.ParameterClasses;

public class ProductSearchParam {
  public string Name { get; set; }
  public string ProductNumber { get; set; }
  public decimal? BeginningCost { get; set; }
  public decimal? EndingCost { get; set; }
}
Add Method to Call Stored Procedure
Open the RepositoryBase.cs file and create a new method named SearchUsingStoredProcedure(), as shown in Listing 18. In this method, pass in an instance of the parameter class and a SQL string that contains the name of the stored procedure. Assign the SQL string passed in to the SQL property and build the columns collection for the entity class collection to be returned.

Create the command object and assign the CommandType property of the command object to the enumeration CommandType.StoredProcedure. Check the param parameter to ensure that it isn't null. If it isn't, build the collection of search columns to use to build the set of parameters that will be passed to the stored procedure. You can use the same BuildWhereClauseParameters() method you used before, as this adds parameters to the command object based on the set of ColumnMapper objects passed to it. Finally, call the stored procedure and use the result set to build the collection of entity objects.

Listing 18: Add new method to accept a SQL statement for calling a stored procedure

public virtual List<TEntity>
  SearchUsingStoredProcedure<TEntity, TParam>
  (TParam param, string sql) {
  List<ColumnMapper> searchColumns = new();
  List<TEntity> ret;

  // Store the SQL submitted
  SQL = sql;

  // Build columns collection for entity class
  Columns = BuildColumnCollection<TEntity>();

  // Create Command Object with SQL
  DbContext.CreateCommand(SQL);

  // Set CommandType to Stored Procedure
  DbContext.CommandObject.CommandType =
    CommandType.StoredProcedure;

  if (param != null) {
    // Build a collection of ColumnMapper objects
    // based on properties in the TParam object
    searchColumns = BuildSearchColumnCollection
      <TEntity, TParam>(param);

    // Add any Parameters?
    if (searchColumns != null &&
      searchColumns.Count > 0) {
      BuildWhereClauseParameters(
        DbContext.CommandObject, searchColumns);
    }
  }

  ret = BuildEntityList<TEntity>
    (DbContext.CreateDataReader());

  return ret;
}

When you were building the WHERE clause for a dynamic SQL statement, you only needed to create ColumnMapper objects for those properties in the search class that had a value in them. When calling a stored procedure, you need to create a ColumnMapper object for all parameters, whether or not there's a value in them. Locate the BuildSearchColumnCollection() method and, within the foreach() loop, modify the if statement that checks to see if the value is not null to look like the following.

if (value != null ||
  (DbContext.CommandObject != null &&
   DbContext.CommandObject.CommandType ==
     CommandType.StoredProcedure)) {

One more location where you need to change code to support calling stored procedures is within the BuildWhereClauseParameters() method. As you loop through each ColumnMapper object to build the parameter, you're going to either set the parameter's Value property to the value from the search class, or to DBNull.Value. Also change it so the ParameterValue property is set back into the collection of entity columns only if you're not calling a stored procedure. This is because the parameter names passed to the stored procedure may not be the same names as the property names in the entity column collection. Modify the BuildWhereClauseParameters() method to look like the code shown in Listing 19.

Listing 19: Modify the BuildWhereClauseParameters() method to set a DBNull.Value

protected virtual void
  BuildWhereClauseParameters(IDbCommand cmd,
    List<ColumnMapper> whereColumns) {
  // Add parameters for each key value passed in
  foreach (ColumnMapper item in whereColumns) {
    var param = DbContext.CreateParameter(
      item.ColumnName,
      item.SearchOperator == "LIKE" ?
        item.ParameterValue + "%" :
        item.ParameterValue);

    // Add parameter value or DBNull value
    param.Value ??= DBNull.Value;

    cmd.Parameters.Add(param);

    if (cmd.CommandType !=
      CommandType.StoredProcedure) {
      // Store parameter info
      Columns.Find(c => c.ColumnName ==
        item.ColumnName)
        .ParameterValue = item.ParameterValue;
    }
  }
}

Try It Out
Open the Program.cs file and modify the code after retrieving the connection string to look like Listing 20. Run the console application and you should see only products with names starting with the letter C appearing in the console window.

Listing 20: Call a stored procedure using the SearchUsingStoredProcedure() method

using AdvWorksDbContext db = new(ConnectString);

string sql = "SalesLT.Product_Search";
ProductSearchParam param = new() {
  Name = "C"
};
List<Product> list = db.Database
  .SearchUsingStoredProcedure<Product,
    ProductSearchParam>(param, sql);

// Display Products
foreach (var item in list) {
  Console.WriteLine(item);
}

Console.WriteLine();
Console.WriteLine($"Total Items: {list.Count}");
Console.WriteLine();
Console.WriteLine($"SQL Submitted: {db.Database.SQL}");
Call Stored Procedure with No Parameters
If you have a stored procedure that doesn't have any parameters, you can call that as well. Just pass a null value as the first parameter to the new Search() overload you just added. As an example, create the following stored procedure in the AdventureWorksLT database:

CREATE PROCEDURE [SalesLT].[Product_GetAll]
AS
BEGIN
  SELECT *
  FROM SalesLT.Product;
END

Try It Out
Open the Program.cs file and modify the line of code that sets the name of the stored procedure to call.

string sql = "SalesLT.Product_GetAll";

Next, modify the line of code that calls the SearchUsingStoredProcedure() method. The TEntity and TParam types passed should both be the Product entity class. Pass a null value to the first parameter to avoid creating any parameters for this stored procedure call.

List<Product> list = db.Database
  .SearchUsingStoredProcedure
    <Product, Product>(null, sql);

Run the console application and you should see all of the product data displayed after making this call to the stored procedure.

Getting the Sample Code
You can download the sample code for this article by visiting www.CODEMag.com under the issue and article, or by visiting www.pdsa.com/downloads. Select "Articles" from the Category drop-down, then select "Simplifying ADO.NET Code in .NET 6: Part 2" from the Item drop-down.

Stored Procedure with Output Parameter
Stored procedures can not only have input parameters, but output parameters as well. To retrieve the value from an OUTPUT parameter, you need to ensure that you read the parameter immediately after calling the stored procedure. If you're reading data using a data reader, you need to close the reader, but NOT close the connection. To test this, create the following stored procedure in the AdventureWorksLT database:

CREATE PROCEDURE
  [SalesLT].[Product_GetAllWithOutput]
  @Result nvarchar(10) OUTPUT
AS
BEGIN
  SELECT *
  FROM SalesLT.Product;

  /* Set the output parameter */
  SELECT @Result = 'Success';
END

Create [OutputParam] Attribute
You need to inform the RepositoryBase class if you're going to have an OUTPUT parameter that needs to be returned. An easy way to do this is to create another attribute. Right mouse-click on the Common folder, create a new class named OutputParamAttribute, and enter the code shown below in this new file.

#nullable disable
using System.Data;
namespace AdoNetWrapper.Common;

[AttributeUsage(AttributeTargets.Property)]
public class OutputParamAttribute : Attribute {
  public ParameterDirection Direction
    { get; set; }
  public DbType DbType { get; set; }
  public int Size { get; set; }

  public OutputParamAttribute(
    ParameterDirection direction) {
    Direction = direction;
  }
}

The OutputParamAttribute class inherits from the Attribute class and exposes three public properties. The Direction property is the one exposed from the constructor, as that's the one you're going to use the most.

Create Search Class with OutputParam Attribute
Any time you have a stored procedure with parameters, you need to build a parameter class to map to those parameters. Right mouse-click on the ParameterClasses folder, create a new class named ProductGetAllParam, and enter the code shown below into this new file. Notice that the Result property is decorated with the new [OutputParam] attribute you just created.

#nullable disable

using AdoNetWrapper.Common;
using System.Data;

namespace AdoNetWrapperSamples.ParameterClasses;

public class ProductGetAllParam {
  [OutputParam(ParameterDirection.Output,
    Size = 10)]
  public string Result { get; set; }
}

Modify ColumnMapper Class
Because you now have additional properties within the [OutputParam] attribute, you need to add these same properties to the ColumnMapper class. As you iterate over the properties for a search class, you can store the data from the [OutputParam] attribute into the ColumnMapper object for use when calling the stored procedure. Open the ColumnMapper.cs file and add a using statement.

using System.Data;
Add the following new properties to the ColumnMapper class.

public ParameterDirection Direction
  { get; set; }
public DbType DbType { get; set; }
public int Size { get; set; }

Add a constructor to the ColumnMapper class to set the default parameter direction to Input. Also take this opportunity to initialize the SearchOperator to the equal sign (=).

public ColumnMapper() {
  SearchOperator = "=";
  Direction = ParameterDirection.Input;
}

Modify the BuildSearchColumnCollection() Method
Open the RepositoryBase.cs file and modify the BuildSearchColumnCollection() method to check for an [OutputParam] attribute. If one is found, transfer the properties found in the OutputParam into the ColumnMapper object. Within the foreach loop, after the code that checks for a [Search] attribute, add the following code to check for an [OutputParam] attribute.

// Does Property have an [OutputParam] attribute
OutputParamAttribute oa = prop
  .GetCustomAttribute<OutputParamAttribute>();
if (oa != null) {
  colMap.Direction = oa.Direction;
  colMap.DbType = oa.DbType;
  colMap.Size = oa.Size;
}

Modify the BuildSearchWhereClause() Method
Now locate the BuildSearchWhereClause() method and modify the code in the foreach() to only retrieve those columns where the Direction property is either Input or InputOutput. Those properties that have a Direction set to Output don't need to be included in the WHERE clause.

foreach (var item in columns
  .Where(c => c.Direction ==
    ParameterDirection.Input
    || c.Direction ==
    ParameterDirection.InputOutput)) {

Modify the BuildWhereClauseParameters() Method
Find the BuildWhereClauseParameters() method and modify the foreach() to only retrieve those columns where the Direction property is either Input or InputOutput.

foreach (ColumnMapper item in whereColumns
  .Where(c => c.Direction ==
    ParameterDirection.Input
    || c.Direction ==
    ParameterDirection.InputOutput)) {

Add a BuildOutputParameters() Method
For working with stored procedure OUTPUT parameters, build a new method to handle those columns in the search class that are decorated with the [OutputParam] attribute. Create a new method named BuildOutputParameters() that accepts a Command object and a list of columns from the search class. In the foreach() iterator, you're only going to extract those columns where the Direction property is either Output or InputOutput.

protected virtual void BuildOutputParameters
  (IDbCommand cmd, List<ColumnMapper> columns) {
  // Add output parameters
  foreach (ColumnMapper item in columns
    .Where(c => c.Direction ==
      ParameterDirection.Output ||
      c.Direction ==
      ParameterDirection.InputOutput)) {
    var param = DbContext.CreateParameter(
      item.ColumnName, null);
    param.Direction = item.Direction;
    param.DbType = item.DbType;
    cmd.Parameters.Add(param);
  }
}

Listing 21: Create a new method to get the output parameter values

protected virtual void GetOutputParameters
  <TParam>(TParam param,
    List<ColumnMapper> columns) {
  // Get output parameters
  foreach (ColumnMapper item in columns
    .Where(c => c.Direction ==
      ParameterDirection.Output ||
      c.Direction ==
      ParameterDirection.InputOutput)) {
    // Get the output parameter
    var outParam = DbContext
      .GetParameter(item.ColumnName);
    // Set the value on the parameter object
    typeof(TParam).GetProperty(item.ColumnName)
      .SetValue(param, outParam.Value, null);
  }
}

Listing 22: Create a SqlServerRepositoryBase class to override those methods that have SQL Server specific functionality

#nullable disable

using System.Data;
using System.Data.SqlClient;
namespace AdoNetWrapper.Common;

public class SqlServerRepositoryBase
  : RepositoryBase {
  public SqlServerRepositoryBase(
    SqlServerDatabaseContext context)
    : base(context) { }

  protected override void
    BuildOutputParameters(IDbCommand cmd,
      List<ColumnMapper> columns) {
    // Add output parameters
    foreach (ColumnMapper item in columns
      .Where(c => c.Direction ==
        ParameterDirection.Output)) {
      var param = (SqlParameter)DbContext
        .CreateParameter(item.ColumnName, null);
      param.Direction = item.Direction;
      param.DbType = item.DbType;
      // Need to set the Size for SQL Server
      param.Size = item.Size;
      cmd.Parameters.Add(param);
    }
  }
}
Add GetOutputParameters() Method
It's only after the stored procedure has been processed that you may retrieve any OUTPUT parameters. Create a new method named GetOutputParameters() (shown in Listing 21) to iterate over the search columns, retrieve the value from the stored procedure, and place it into the appropriate property of the search class.

Create SqlServerRepositoryBase Class
When using SQL Server to retrieve OUTPUT parameters, you must set the Size property when adding the parameter to the Command object. This might not be true for all .NET data providers, but you need it for SQL Server. Unfortunately, the Size property doesn't exist on the IDataParameter interface, so you must create a SqlServerRepositoryBase class that inherits from the RepositoryBase class and override the BuildOutputParameters() method. Within this override, you set the Size property on the parameter object. Right mouse-click on the Common folder and add a new class named SqlServerRepositoryBase. Place the code shown in Listing 22 into this new file.

Modify SearchUsingStoredProcedure() Method
Open the RepositoryBase.cs file and locate the SearchUsingStoredProcedure() method. Within the if statement that checks that the param variable is not null, add a new if statement immediately after the existing if statement, as shown in Listing 23.

Listing 23: Modify the SearchUsingStoredProcedure() method to build output parameters

if (param != null) {
  // Build collection of ColumnMapper objects
  // based on properties in the TParam object
  searchColumns = BuildSearchColumnCollection
    <TEntity, TParam>(param);

  // Add any Parameters?
  if (searchColumns != null &&
    searchColumns.Count > 0) {
    BuildWhereClauseParameters(DbContext
      .CommandObject, searchColumns);
  }

  // Add any Output Parameters?
  if (searchColumns.Where(c => c.Direction ==
    ParameterDirection.Output ||
    c.Direction ==
    ParameterDirection.InputOutput).Any()) {
    BuildOutputParameters(DbContext.CommandObject,
      searchColumns);
  }
}

Move a little further down in this method and, just after the call to the BuildEntityList() method and before the return statement, add the following code to retrieve any output parameters:

// Retrieve Any Output Parameters
if (searchColumns.Where(c => c.Direction ==
  ParameterDirection.Output ||
  c.Direction ==
  ParameterDirection.InputOutput).Any()) {
  // Must close DataReader for output
  // parameters to be available
  DbContext.DataReaderObject.Close();

  GetOutputParameters(param, searchColumns);
}
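That close-the-reader requirement comes from ADO.NET itself, not from the wrapper classes. Stripped down to a raw SqlClient call, the underlying sequence looks roughly like the following sketch; the names here are my own illustration, not code from the sample download:

using SqlConnection cn = new(ConnectString);
using SqlCommand cmd = new(
  "SalesLT.Product_GetAllWithOutput", cn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add(
  new SqlParameter("@Result",
    SqlDbType.NVarChar, 10) {
    Direction = ParameterDirection.Output
  });

cn.Open();
using (SqlDataReader rdr = cmd.ExecuteReader()) {
  while (rdr.Read()) {
    // Consume the result set
  }
} // The reader must be closed/disposed first...

// ...only now is the output parameter populated
string result =
  (string)cmd.Parameters["@Result"].Value;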
Try It Out
Open the AdvWorksDbContext.cs file and modify the Database property to use the new SqlServerRepositoryBase class.

public SqlServerRepositoryBase Database
  { get; set; }

Open the Program.cs file and modify the code to look like the following.

string sql = "SalesLT.Product_GetAllWithOutput";
ProductGetAllParam param = new() {
  Result = ""
};

List<Product> list = db.Database
  .SearchUsingStoredProcedure<Product,
    ProductGetAllParam>(param, sql);

Add the following code after the loop displaying all the items returned.

Console.WriteLine();
Console.WriteLine(
  $"Output Param: '{param.Result}'");

Run the console application and you should see the OUTPUT parameter named Result appear after all the products have been displayed.

Summary
This article built more functionality into the wrapper classes around ADO.NET to give you the ability to add WHERE clauses to SELECT statements. In addition, you saw how to retrieve data from views and stored procedures. Multiple result sets can be handled, and you can now retrieve scalar values. The best thing is that most of the code is going into generic classes, so as you add more classes to work with more tables, the code you write for each of those is minimal.

In the next article, you'll learn to insert, update, and delete data. You'll also learn to submit transactions, validate data using data annotations, and handle exceptions.

Paul D. Sheriff
ONLINE QUICK ID 2209051

Customized Object-Oriented and Client-Server Scripting in C#
In this article, I'm going to talk about using custom object-oriented scripting in C#. By "custom," I mean that all you're going to see here is available to use and modify from GitHub. By "C#," I mean that the scripting language is implemented in C# and you can just include it in your project in order to adjust it as you wish. As a scripting language, I'm going to use CSCS (customized scripting in C#). I've talked about this language in a few previous CODE Magazine articles (see links in the sidebar). CSCS is an open-source scripting language that's very easy to integrate into your C# project.

Vassili Kaplan
VassiliK@gmail.com
Vassili is a former Microsoft Lync developer. He's been studying and working in a few countries, such as Russia, Mexico, the USA, and Switzerland. He has a Masters in Applied Mathematics with Specialization in Computational Sciences from Purdue University, West Lafayette, Indiana, and a Bachelor in Applied Mathematics from ITAM, Mexico City. In his spare time, Vassili works on the CSCS scripting language. His other hobbies are traveling, biking, badminton, and enjoying a glass of a good red wine. You can contact him through his website: http://www.iLanguage.ch or e-mail: vassilik@gmail.com

You're going to see how to use classes and objects in scripting, and also how they're implemented in C#. It's important that you have full control of how the object-oriented functionality is implemented. For instance, you can have multiple inheritance in scripting, which is forbidden in C# or in JavaScript. But you could also disable it if you think that it's against your beliefs. It's important that you, and not another architect, decide what features you want to have to solve a particular problem.

The great thing about object-oriented code is that it can make small, simple problems look like large, complex ones.
— Anonymous

As an example of using object-oriented scripting, I'm going to take a look at a client-server application, where I'll show how you can send and receive objects. I'll also show a simple marshalling-unmarshalling mechanism (converting objects to a string and back) to pass data across the wire. You can use a similar approach for any custom client-server communication, just using a couple of lines of scripting code.

To distinguish between the C# code and CSCS scripting code, all C# code is provided below with syntax highlighting, whereas all scripting code doesn't use it.

Let's start by looking at how you can set up scripting in your .NET Visual Studio project.

Setting Up CSCS Scripting
One of the simplest ways to start using CSCS scripting is to download the source code from GitHub (see https://github.com/vassilych/cscs) and add the source code directly to your C# .NET project. The license lets you modify and use the code without any restrictions.

An example of including the CSCS Scripting Engine in a Windows GUI project is a WPF project, available here: https://github.com/vassilych/cscs_wpf.

Another example is a Xamarin iOS—Android mobile project that can be downloaded from here: https://github.com/vassilych/mobile.

CSCS is a functional language, syntactically very similar to JavaScript. To add new functionality to CSCS, you'll need to perform just these three steps:

1. Define a CSCS function name as a constant. When parsing this constant, the CSCS parser triggers the appropriate implementation code.
2. Implement a new class, deriving from the ParserFunction class. The most important method is Evaluate(). It will be triggered when the constant defined in the previous step is parsed.
3. Register the newly created class with the parser.

Let's see how this is done using the implementation of the power function.

First, you define an appropriate constant in the Constants.cs file:

public const string MATH_POW = "Math.Pow";

Next, you define the implementation:

class PowFunction : ParserFunction {
  protected override Variable Evaluate(
    ParsingScript script) {
    List<Variable> args = script.
      GetFunctionArgs();
    Utils.CheckArgs(args.Count, 2, m_name, true);
    Variable arg1 = args[0];
    Variable arg2 = args[1];

    arg1.Value = Math.Pow(arg1.Value,
      arg2.Value);
    return arg1;
  }
  public override string Description() {
    return "Returns a specified number \
      raised to the specified power.";
  }
}

Finally, the last step is registering this new functionality with the parser at the program initialization stage:

ParserFunction.RegisterFunction(
  Constants.MATH_POW, new PowFunction());

You're done now. As soon as the parser sees something like Math.Pow(2, 5), the Evaluate() method above is triggered and the correct value of 32 is calculated.
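To make the pattern stick, here's one more function built the same way: a square root. This example is my own illustration and isn't from the CSCS repo, and the MATH_SQRT constant name is an assumption. It simply repeats the same three steps with the same APIs used by PowFunction above.

// Step 1: define the constant in Constants.cs
public const string MATH_SQRT = "Math.Sqrt";

// Step 2: implement the function
class SqrtFunction : ParserFunction {
  protected override Variable Evaluate(
    ParsingScript script) {
    List<Variable> args = script.
      GetFunctionArgs();
    Utils.CheckArgs(args.Count, 1, m_name, true);
    Variable arg = args[0];
    arg.Value = Math.Sqrt(arg.Value);
    return arg;
  }
  public override string Description() {
    return "Returns the square root of a specified number.";
  }
}

// Step 3: register it at initialization
ParserFunction.RegisterFunction(
  Constants.MATH_SQRT, new SqrtFunction());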

The Description() method is triggered when the user calls a Help scripting method.

Note the convenient method script.GetFunctionArgs(). It returns all comma-separated arguments between the parentheses (e.g., it returns 2 and 5 for Math.Pow(2, 5)). You can also put some variables and arrays as function arguments—their values will be recursively extracted during the GetFunctionArgs() call.

Object-oriented programming had boldly promised "to model the world." Well, the world is a scary place where bad things happen for no apparent reason, and in this narrow sense I concede that OO does model the world.
— Dave Fancher

In the next sections, I'm going to show how you can use the new function definition shown in this section to define classes and objects.

"Hello, World!" in Object-Oriented Scripting
Let's first see how classes and objects are defined and used in scripting and then how they're implemented in C#.

I hope you'll find this very intuitive and similar to other languages, with a few exceptions (like multiple inheritance).

With enough practice, any interface is intuitive.
— Anonymous

Let's see two simple examples of a class definition in CSCS:

class Stuff1 {
  x = 2;
  Stuff1(a) {
    x = a;
  }
  function helloWorld() {
    print("Hello, World!");
  }
}

class Stuff2 {
  y = 3;
  Stuff2(b) {
    y = b;
  }
  function addStuff2(n) {
    return n + y;
  }
}

You can now create new objects and use these classes as usual:

obj1 = new Stuff1(10);
obj2 = new Stuff2(5);
print(obj1.X + obj2.Y); // prints 15.
print(obj1); // prints stuff1.obj1[x=10]

Now let's use multiple inheritance, something you can't do in many modern languages. Let's define a class that inherits both the method implementations and variables from the base classes:

class CoolStuff : Stuff1, Stuff2 {
  z = 3;
  CoolStuff(a=1, b=2, c=3) {
    x = a;
    y = b;
    z = c;
  }
  function addCoolStuff() {
    return x + addStuff2(z);
  }
  function ToString() {
    return "{" + x + "," + y + "," + z + "}";
  }
}

Here's how you can use this newly defined class:

obj3 = new CoolStuff(11, 22, 33);
obj3.HelloWorld(); // prints "Hello, World!"
print(obj3.AddStuff2(20)); // prints 42
print(obj3); // prints {11,22,33}

As you can see, both variables and methods can be used from the base classes. A special method is ToString(). When defined, it overrides the string representation of the object (e.g., what's printed in a print(object) statement). The default ToString() implementation is the following: ClassName.InstanceName[variable1,variable2,…].

You probably noted that some of the class methods start with a lowercase letter, others with an uppercase: it doesn't matter, the CSCS scripting language is case-insensitive.

CSCS scripting language is case-insensitive.

You can also debug a CSCS script. The easiest method is to install the CSCS Debugger and REPL Extension for Visual Studio Code (https://marketplace.visualstudio.com/items?itemName=vassilik.cscs-debugger). This CODE Magazine article explains how to use Visual Studio Code Extensions for debugging: https://www.codemag.com/article/1809051.

Figure 1 shows a debugging session with some CSCS scripting statements.


Figure 1: Debugging CSCS Scripting with Visual Studio Code on macOS

Implementing Scripting Classes and Objects in C#
Let's see briefly how the classes and objects scripting functionality from the previous section is implemented in C#. As you previously saw with the Math.Pow() example, all of the CSCS functionality is implemented as functions. Yes, even classes are implemented this way, no matter how strange it sounds.

When the CSCS parser reads a class definition that starts with class ClassName …, the C# implementation is triggered (see Listing 1).

The code in Listing 1 defines a new class, which can now be instantiated in CSCS. This also needs to be registered with the parser before being used:

ParserFunction.RegisterFunction(Constants.CLASS,
    new ClassCreator());
// CLASS is defined as "class"

As soon as the CSCS parser sees a statement like obj1 = new Stuff1…, another C# implementation is triggered, namely the Evaluate() method of the NewObjectFunction class (see Listing 2). The NewObjectFunction must also be registered with the CSCS parser as follows:

ParserFunction.RegisterFunction(Constants.NEW,
    new NewObjectFunction());
// NEW is defined as "new"

I encourage you to take a look at the CSCS GitHub page (https://github.com/vassilych/cscs) for more implementation details.

Next, let's see an example of using scripting to access Web Services.

Accessing Web Services from Scripting
As an example of accessing a Web Service, you're going to use the Alpha Vantage Web Service (https://www.alphavantage.co). Alpha Vantage provides a financial market data API.

The main advantages of using Alpha Vantage are that it's pretty straightforward to create a request and that it's also free to use (well, up to five requests per minute or 500 requests per day, as of this writing). To replicate what you're doing here, you need to request a free API key here: https://www.alphavantage.co/support/#api-key.

Here is how you create a URL to access their Web Service in CSCS:

baseURL = "https://www.alphavantage.co/" +
    "query?function=TIME_SERIES_DAILY&symbol=";
apikey = "Y12T0TY5EUS6BXXX";
symbol = "MSFT";
stockUrl = baseURL + symbol + "&apikey=" + apikey;


Listing 1: C# Code to create a scripting class

public class ClassCreator : ParserFunction
{
    protected override Variable Evaluate(ParsingScript script)
    {
        string className = Utils.GetToken(script);
        string[] baseClasses = Utils.GetBaseClasses(script);
        var newClass = new CSCSClass(className, baseClasses);

        newClass.ParentOffset = script.Pointer;
        newClass.ParentScript = script;
        string scriptExpr = Utils.GetBodyBetween(script,
            Constants.START_GROUP, Constants.END_GROUP);
        string body = Utils.ConvertToScript(scriptExpr, out _);

        ParsingScript tempScript = script.GetTempScript(body);
        tempScript.CurrentClass = newClass;
        tempScript.DisableBreakpoints = true;
        var result = tempScript.ExecuteScript();
        return result;
    }
}

Listing 2: C# code for the new object implementation

public class NewObjectFunction : ParserFunction
{
    protected override Variable Evaluate(ParsingScript script)
    {
        string className = Utils.GetToken(script);
        className = Constants.ConvertName(className);
        List<Variable> args = script.GetFunctionArgs();

        var csClass = CSCSClass.GetClass(className) as CompiledClass;
        if (csClass != null) {
            ScriptObject obj = csClass.GetImplementation(args);
            return new Variable(obj);
        }

        var instance = new CSCSClass.ClassInstance(
            script.CurrentAssign, className, args, script);

        var newObject = new Variable(instance);
        newObject.ParamName = instance.InstanceName;
        return newObject;
    }
}

Listing 3: JSON string returned from the Alpha Vantage Web Service

{
    "Meta Data": {
        "1. Information": "Daily Prices (open, high, low, close) and Volumes",
        "2. Symbol": "MSFT",
        "3. Last Refreshed": "2022-05-26 16:00:01",
        "4. Output Size": "Compact",
        "5. Time Zone": "US/Eastern"
    },
    "Time Series (Daily)": {
        "2022-05-26": {
            "1. open": "262.2700",
            "2. high": "267.0000",
            "3. low": "261.4300",
            "4. close": "265.9000",
            "5. volume": "24960766"
        },
        "2022-05-25": {
            "1. open": "258.1400",
            "2. high": "264.5800",
            "3. low": "257.1250",
            "4. close": "262.5200",
            "5. volume": "28547947"
        },
        ...
    }
}

baseURL = "https://www.alphavantage.co/" + • The Tracking variable is needed for multiple requests.


"query?function=TIME_SERIES_DAILY&symbol="; When you get a response back, the Tracking variable
apikey = "Y12T0TY5EUS6BXXX"; associates it with the right request.
symbol = "MSFT"; • OnSuccess and OnFailure are CSCS callback methods
stockUrl = baseURL + symbol + "&apikey=" + triggered when the response is received.
apikey; • The content type by default is application/x-www-
form-urlencoded.
As a result, you’ll get a JSON file (see an example in List- • You can also send some headers with the request. This
ing 3). is useful for the REST API requests.

This is the function to create a Web Request using CSCS All parameters, except Request and URL, are optional. If the
scripting: OnSuccess and OnFailure callback methods aren’t supplied,
the request is executed synchronously and the result of the
WebRequest(Request, Url, Load, Tracking, request is returned from the WebRequest method.
OnSuccess, OnFailure, ContentType, Headers);
An example of accessing the Alpha Vantage Web Service is
Here is what these parameters mean: the following:

• Request is one of the standards GET, POST, PUT, etc. result = WebRequest("GET", stockUrl, "", symbol);
For Alpha Vantage, you need GET.
• The Web service URL, as defined above. The result of this call is shown in Listing 3 for the Microsoft
• The load is some additional data to send. stock. To be able to use the returned JSON string, there’s an



auxiliary CSCS GetVariableFromJSON() function. After applying this function, the main parts of the JSON string are split into a list and their subparts can be accessed by a key. Here is how you can access the resulting string (see Listing 3 for details):

function processResponse(text)
{
    if (text.contains("Error")) {
        return text;
    }
    jsonFromText = GetVariableFromJSON(text);
    metaData = jsonFromText[0];
    result = jsonFromText[1];
    symbol = metaData["2. Symbol"];
    last = metaData["3. Last Refreshed"];
    allDates = result.keys;
    dateData = result[allDates[0]];
    myStock = new Stock(symbol, last, dateData);
    return myStock;
}

The processResponse() function returns a Stock object. Its class definition is shown below. The main work of processing the results is done in the Stock class constructor. Here is the Stock class definition:

class Stock {
    symbol = "";
    date = "";
    open = 0;
    low = 0;
    high = 0;
    close = 0;
    volume = 0;
    Stock(symb, dt, data) {
        symbol = symb;
        date = dt;
        open = Math.Round(data["1. open"], 2);
        high = Math.Round(data["2. high"], 2);
        low = Math.Round(data["3. low"], 2);
        close = Math.Round(data["4. close"], 2);
        volume = data["5. volume"];
    }
}

Additionally, you can define a custom function for converting the Stock object into a string. An example of such a function is the following (this method should be added inside of the Stock class definition):

function ToString()
{
    return symbol + " " + date + ". Open: " + open +
        ", Close: " + close + ": Low: " + low +
        ", High: " + high + ", Volume: " + Volume;
}

Now add the following CSCS code:

result = WebRequest("GET", stockUrl, "", symbol);
stock = processResponse(result);
print(stock);

This prints the returned Stock object according to the ToString() method defined:

MSFT 2022-05-26 16:00:01. Open: 262.27,
Close: 265.9: Low: 261.43, High: 267,
Volume: 24960766

Before getting into the main example of this article, the client-server communication, let's take a look at marshalling and unmarshalling objects in CSCS scripting.

Marshalling and Unmarshalling Objects
Using CSCS scripting, you can convert any object or variable to a string and back using these methods: Marshal(object) and Unmarshal(string).

The converted string looks like a simplified XML, but it's not XML. You can tweak the C# implementation code a bit if you want it to be a legal XML string.

The only place where you should really use XML is your resume.
Anonymous

Here's an example of marshalling a Stock object from the previous section:

mystock = processResponse(r);
ms = Marshal(mystock);
// Returns:
// <mystock:class:stock><symbol:STR:"MSFT">
// <date:STR:"2022-05-27"><open:NUM:268.48>
// <low:NUM:267.56><high:NUM:273.34>
// <close:NUM:273.24><volume:STR:"26910806">

ms.type; // Returns STRING

Here's how you construct an object back from a string:

ums = Unmarshal(ms);
ums.type; // Returns
// SplitAndMerge.CSCSClass+ClassInstance: Stock

You can also marshal and unmarshal any other data structures:

str = "a string";
mstr = Marshal(str);
// Returns: <str:STR:"a string">
umstr = Unmarshal(mstr);
int = 13;
mint = Marshal(int); // Returns: <int:NUM:13>
umint = Unmarshal(mint);

The marshalling and unmarshalling is done recursively.
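To make that recursion concrete, here's a conceptual C# sketch of how a converter could walk nested values and emit the <name:TYPE:value> shape shown above. This is purely illustrative—the real implementation lives in the CSCS GitHub repository, and the LIST tag below is my own placeholder, not CSCS's actual format:

using System.Collections.Generic;
using System.Linq;

static string MarshalValue(string name, object value) =>
    value switch
    {
        // Leaf values map to the STR/NUM tags shown above
        string s => $"<{name}:STR:\"{s}\">",
        double d => $"<{name}:NUM:{d}>",
        int i => $"<{name}:NUM:{i}>",
        // Nested maps/arrays marshal each element with the
        // same rules, recursively
        IEnumerable<KeyValuePair<string, object>> map =>
            $"<{name}:LIST:" + string.Concat(
                map.Select(kv => MarshalValue(kv.Key, kv.Value)))
            + ">",
        _ => $"<{name}:STR:\"{value}\">"
    };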



Here's an example of an array (which is also a map for some elements) where one of the elements of the original array is an array itself (note that, in general, the data in an array doesn't have to be of the same type):

a[0]=10;
a[1]="blah";
a[2]=[9, 8, 7];
a["x:lol"]=12;
a["y"]=13;
ma = marshal(a);
maa = unmarshal(ma);
// Returns:
// ["x:lol":12, "y":13, 10, "blah", [9, 8, 7]]
maa.type; // Returns ARRAY

Now you're ready for the main example of this article: sending and receiving objects between a server and a client, all implemented in scripting.

A Client-Server Example
The client-server example encompasses what you've seen before: a Web Service client, marshalling and unmarshalling objects, and processing JSON strings.

Sample server code does the following for each connected client: in case the request is equal to stock, the server interprets the load parameter as the stock name (e.g., MSFT) and then sends a stock request to the Alpha Vantage Web Service that I discussed earlier. After receiving the data, the server sends back the Stock object containing all the stock data fields.

To start a server, you just need to call the startsrv() scripting function, supplying as arguments a function to be triggered on each client connection and a port where the server is going to listen for the incoming requests. With each request, the server expects the request name and a load object (which can be an array of arguments).

Here's the scripting server-side code:

counter = 0;
function serverFunc(request, obj) {
    counter++;
    if (request == "stock") {
        stockUrl = baseURL + obj + "&apikey=" + apikey;
        print(counter + ", request: " + stockUrl);
        data = WebRequest("GET", stockUrl, "", symbol);
        result = processResponse(data);
        return result;
    }
}

startsrv("serverFunc", 12345);

You can update the server scripting code on the fly without restarting the server.

Note that you can change the server scripting method to be executed on each client connection on the fly without restarting the server. You can just update and redefine the serverFunc method (e.g., by using the VS Code CSCS REPL extension mentioned earlier).

On the scripting client-side, the connecting code looks like this:

response = connectsrv(request, load, port,
    host = "localhost");

(If the server host isn't supplied, the local host is used for connections.) Let's see an example of accessing the server defined above:

response = connectsrv("stock", "MSFT", 12345);
print(response.Symbol + ": Close: " +
    response.Close + ", Volume: " +
    response.Volume);
// MSFT: Close: 273.24, Volume: 26910806

As you can see, the resulting object is returned directly from the connectsrv() call because all of the marshalling and unmarshalling is done by the scripting framework.

Wrapping Up
The main advantages of using a scripting module inside of your projects are:

• You'll save time when writing code because most of the code is usually much shorter than it would've been for the same functionality in C#. This is what you saw with the client-server example.
• You can use features not available directly in C# (e.g., multiple inheritance).
• You can modify scripting code on the fly without the necessity of recompiling and restarting the service.

I'm looking forward to your feedback, especially how you use CSCS scripting in your projects, what Web Services you access, and any performance tricks you're using.

Vassili Kaplan

References
Developing Cross-Platform Native Apps with a Functional Scripting Language:
https://www.codemag.com/Article/1711081

Using a Scripting Language to Develop Native Windows WPF GUI Apps:
https://www.codemag.com/Article/2008081

Prototyping with Microsoft Maquette: A New Virtual Reality Tool:
https://www.codemag.com/Article/2009071

Using CSCS Scripting Language for Cross-Platform Development:
https://www.smashingmagazine.com/2020/01/cscs-scripting-language-cross-platform-development

CSCS Scripting GitHub:
https://github.com/vassilych/cscs

CSCS Debugger & REPL:
https://marketplace.visualstudio.com/items?itemName=vassilik.cscs-debugger

CSCS Language eBook:
https://www.syncfusion.com/ebooks/implementing-a-custom-language

Writing Native Mobile Apps in a Functional Language Succinctly eBook:
https://www.syncfusion.com/ebooks/writing_native_mobile_apps_in_a_functional_language_succinctly



ONLINE QUICK ID 2209061

Benchmarking .NET 6 Applications


Using BenchmarkDotNet: A Deep Dive
The benchmarking technique helps determine the performance measurements of one or more pieces of code in your application. You can take advantage of benchmarking to determine the areas in your source code that need to be optimized. In this article, I'll examine what benchmarking is, why benchmarking is essential, and how to benchmark .NET code using BenchmarkDotNet.

Joydip Kanjilal
joydipkanjilal@yahoo.com

Joydip Kanjilal is an MVP (2007-2012), software architect, author, and speaker with more than 20 years of experience. He has more than 16 years of experience in Microsoft .NET and its related technologies. Joydip has authored eight books, more than 500 articles, and has reviewed more than a dozen books.

If you want to work with the code examples discussed in this article, you need the following installed in your system:

• Visual Studio 2022
• .NET 6.0
• ASP.NET 6.0 Runtime
• BenchmarkDotNet

If you don't already have Visual Studio 2022 installed on your computer, you can download it from here: https://visualstudio.microsoft.com/downloads/.

What's a Benchmark?
A benchmark is a simple test that provides a set of quantifiable results that can help you determine whether an update to your code has increased, decreased, or had no effect on performance. It's necessary to comprehend the performance metrics of your application's methods to leverage them throughout the code optimization process. A benchmark may have a broad scope or it can be a micro-benchmark that evaluates minor changes to the source code.

Why You Should Benchmark Code
Benchmarking involves comparing the performance of code snippets, often against a predefined baseline. It's a process used to quantify the performance improvement or degradation of an application's code rewrite or refactor. In other words, benchmarking code is critical for knowing the performance metrics of your application's methods. Benchmarking also allows you to zero in on the parts of the application's code that need reworking.

There are several reasons to benchmark applications. First, benchmarking can help to identify bottlenecks in an application's performance. By identifying the bottlenecks, you can determine the changes required in your source code to improve the performance and scalability of the application.

Introducing BenchmarkDotNet
BenchmarkDotNet is an open-source library compatible with both .NET Framework and .NET Core applications that can convert your .NET methods into benchmarks, monitor those methods, and provide insights into the performance data collected. BenchmarkDotNet can quickly transform your methods into benchmarks, run those benchmarks, and obtain the results of the benchmarking process. In BenchmarkDotNet terminology, an operation refers to executing a method decorated with the Benchmark attribute. A collection of such operations is known as an iteration.

What's Baselining? Why Is It Important?
You can also mark a benchmark method as a baseline method and take advantage of baselining to scale your results. When you decorate a benchmark method with the Baseline attribute set to "true," the summary report generated after the benchmark shows an additional column named "Ratio." This column has the value 1.00 for the benchmark method that has been baselined. All other benchmark methods get a Ratio value relative to the baseline.

Benchmarking Application Performance in .NET 6
It's time for some measurements. Let's now examine how to benchmark the performance of .NET 6 applications. You'll create two applications: a console application for writing and executing benchmarks and an ASP.NET 6 app for building an API that will be benchmarked later.

Create a New Console Application Project in Visual Studio 2022
Let's create a console application project that you'll use for benchmarking performance. You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose Continue without code to launch the main screen of the Visual Studio 2022 IDE.

To create a new Console Application Project in Visual Studio 2022:

1. Start the Visual Studio 2022 IDE.
2. In the Create a new project window, select Console App, and click Next to move on.
3. Specify the project name as BenchmarkingConsoleDemo and the path where it should be created in the Configure your new project window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the Place solution and project in the same directory checkbox. Click Next to move on.
5. In the next screen, specify the target framework you would like to use for your console application.
6. Click Create to complete the process.

You'll use this application in the subsequent sections of this article.

Install NuGet Package(s)
So far so good. The next step is to install the necessary NuGet package(s). To install the required packages into your
such operations is known as an iteration. NuGet Package(s). To install the required packages into your



project, right-click on the solution and then select Manage NuGet Packages for Solution.... Now search for the package named BenchmarkDotNet in the search box and install it. Alternatively, you can type the command shown below at the NuGet Package Manager Command Prompt:

PM> Install-Package BenchmarkDotNet

Create a Benchmarking Class
To create and execute benchmarks:

1. Create a Console application project in Visual Studio 2022.
2. Add the BenchmarkDotNet NuGet package to the project.
3. Create a class having one or more methods decorated with the Benchmark attribute.
4. Run your benchmark project in Release mode using the Run method of the BenchmarkRunner class.

A typical benchmark class contains one or more methods marked or decorated with the Benchmark attribute and, optionally, a method that's decorated with the GlobalSetup attribute, as shown in the code snippet given below:

public class MyBenchmarkDemo
{
    [GlobalSetup]
    public void GlobalSetup()
    {
        //Write your initialization code here
    }

    [Benchmark]
    public void MyFirstBenchmarkMethod()
    {
        //Write your code here
    }

    [Benchmark]
    public void MySecondBenchmarkMethod()
    {
        //Write your code here
    }
}

In BenchmarkDotNet, diagnosers are attached to the benchmarks to provide more useful information. The MemoryDiagnoser is a diagnoser that, when attached to your benchmarks, provides additional information, such as the allocated bytes and the frequency of garbage collection.

Here's how your benchmark class looks once you've added the MemoryDiagnoser attribute:

[MemoryDiagnoser]
public class MyBenchmarkDemo
{
    //Code removed for brevity
}

Setup and Cleanup
You might want to execute some code just once, without benchmarking it. As an example, you might want to initialize your database connection or create an HttpClient instance to be used by other methods decorated with the [Benchmark] attribute.

BenchmarkDotNet comes with a few attributes that can help you accomplish this. These attributes are [GlobalSetup], [GlobalCleanup], [IterationSetup], and [IterationCleanup].

You can take advantage of the GlobalSetup attribute to initialize an HttpClient instance, as shown in the code snippet given below:

private static HttpClient _httpClient;

[GlobalSetup]
public void GlobalSetup()
{
    var factory = new
        WebApplicationFactory<Startup>()
        .WithWebHostBuilder(configuration =>
        {
            configuration.ConfigureLogging
                (logging =>
            {
                logging.ClearProviders();
            });
        });

    _httpClient = factory.CreateClient();
}

Similarly, you can take advantage of the GlobalCleanup attribute to write your cleanup logic, as shown in the code snippet below:

[GlobalCleanup]
public void GlobalCleanup()
{
    //Write your cleanup logic here
}

Benchmarking LINQ Performance
Let's now examine how to benchmark LINQ methods. Create a new class named BenchmarkLINQPerformance in a file having the same name with the code shown in Listing 1. This is a simple class that benchmarks the performance of the Single and First methods of LINQ. Now that the benchmark class is ready, let's examine how to run the benchmark using BenchmarkRunner in the next section.

Note that BenchmarkDotNet works only with Console applications. It won't support ASP.NET 6 or any other application types.

Execute the Benchmarks
As of this writing, you can use BenchmarkDotNet in a console application only. You can run a benchmark on a specific type or configure it to run on a specific assembly. The following code snippet illustrates how you can trigger a benchmark on all types in the specified assembly:

var summary = BenchmarkRunner.Run
    (typeof(Program).Assembly);
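BenchmarkDotNet also ships a BenchmarkSwitcher entry point if you'd rather choose which benchmarks to run at the command line. This snippet is an aside of mine and assumes a using BenchmarkDotNet.Running; directive:

// Lets you pick benchmarks interactively or via
// command-line arguments such as --filter
var switcher = BenchmarkSwitcher
    .FromAssembly(typeof(Program).Assembly);
switcher.Run(args);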



You can use the following code snippet to run benchmarking on a specific type:

var summary = BenchmarkRunner.Run
    <BenchmarkLINQPerformance>();

Or you can use:

var summary = BenchmarkRunner.Run
    (typeof(BenchmarkLINQPerformance));

For the benchmark you created in the preceding section, you can use any of these statements in the Program class to execute the benchmark. Figure 1 shows the results of the benchmark.

Listing 1: Benchmarking performance of LINQ

public class BenchmarkLINQPerformance
{
    private readonly List<string>
        data = new List<string>();

    [GlobalSetup]
    public void GlobalSetup()
    {
        for(int i = 65; i < 90; i++)
        {
            char c = (char)i;
            data.Add(c.ToString());
        }
    }

    [Benchmark]
    public string Single() =>
        data.SingleOrDefault(x => x.Equals("M"));

    [Benchmark]
    public string First() =>
        data.FirstOrDefault(x => x.Equals("M"));
}

Interpreting the Benchmarking Results
As you can see in Figure 1, a row of result data is generated for each of the benchmarked methods. Because there are two benchmark methods, there are two rows of benchmark result data. The benchmark results show the mean execution time, garbage collections (GCs), and the allocated memory.
benchmark results show the mean execution time, garbage
collections (GCs), and the allocated memory.

Figure 1: Benchmarking results of Single() vs First() methods



The Mean column shows the average execution time of both methods. As is evident from the benchmark results, the First method is much faster than the Single method in LINQ. The Allocated column shows the managed memory allocated on execution of each of these methods. The Rank column shows the relative execution speeds of these methods ordered from fastest to slowest. Because there are two methods here, it shows 1 (fastest) and 2 (slowest) for the First and Single methods respectively.

Here's what each of the legends represents:

• Method: This column specifies the name of the method that has been benchmarked.
• Mean: This column specifies the average time or the arithmetic mean of the measurements made on execution of the method being benchmarked.
• StdDev: This column specifies the standard deviation, i.e., the extent to which the execution time deviated from the mean time.
• Gen 0: This column specifies the Gen 0 collections made for each set of 1000 operations.
• Gen 1: This column specifies the Gen 1 collections made for each set of 1000 operations.
• Gen 2: This column specifies the Gen 2 collections made for each set of 1000 operations. (Note that here, Gen 2 isn't shown because there were no Gen 2 collections in this example.)
• Allocated: This column specifies the managed memory allocated for a single operation.

Benchmarking StringBuilder Performance
Let's now examine how you can benchmark the performance of the StringBuilder class in .NET. Create a new class named BenchmarkStringBuilderPerformance with the code in Listing 2.

Now, write the two methods for benchmarking the performance of StringBuilder with and without using StringBuilderCache, as shown in Listing 3. The complete source code of the BenchmarkStringBuilderPerformance class is given in Listing 4.

Listing 2: Benchmarking performance of StringBuilder and StringBuilderCache

[MemoryDiagnoser]
[Orderer(BenchmarkDotNet.Order.
    SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class BenchmarkStringBuilderPerformance
{
    const string message =
        "Some text for testing purposes only.";
    const int CTR = 10000;
}

Listing 3: Continued from Listing 2

[Benchmark]
public void WithoutStringBuilderCache()
{
    for (int i = 0; i < CTR; i++)
    {
        var stringBuilder =
            new StringBuilder();
        stringBuilder.Append(message);
        _ = stringBuilder.ToString();
    }
}

[Benchmark]
public void WithStringBuilderCache()
{
    for (int i = 0; i < CTR; i++)
    {
        var stringBuilder =
            StringBuilderCache.Acquire();
        stringBuilder.Append(message);
        _ = StringBuilderCache.
            GetStringAndRelease(stringBuilder);
    }
}

Listing 4: Benchmarking performance of StringBuilderCache

[MemoryDiagnoser]
[Orderer(BenchmarkDotNet.Order.
    SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class BenchmarkStringBuilderPerformance
{
    const string message =
        "Some text for testing purposes only.";
    const int CTR = 10000;

    [Benchmark]
    public void WithoutStringBuilderCache()
    {
        for (int i = 0; i < CTR; i++)
        {
            var stringBuilder =
                new StringBuilder();
            stringBuilder.Append(message);
            _ = stringBuilder.ToString();
        }
    }

    [Benchmark]
    public void WithStringBuilderCache()
    {
        for (int i = 0; i < CTR; i++)
        {
            var stringBuilder =
                StringBuilderCache.Acquire();
            stringBuilder.Append(message);
            _ = StringBuilderCache.
                GetStringAndRelease(stringBuilder);
        }
    }
}

Executing the Benchmarks
Write the following piece of code in the Program.cs file of the BenchmarkingConsoleDemo console application project to run the benchmarks:

using BenchmarkDotNet.Running;
using BenchmarkingConsoleDemo;

class Program
{
    static void Main(string[] args)
    {
        BenchmarkRunner.Run
            <BenchmarkStringBuilderPerformance>();
    }
}



Figure 2: Benchmarking StringBuilder performance

To execute the benchmarks, set the compile mode of the project to Release and run the following command in the same folder where your project file resides:

dotnet run -p
    BenchmarkingConsoleDemo.csproj -c Release

Figure 2 shows the result of the execution of the benchmarks.

The following code snippet illustrates how you can mark the WithStringBuilderCache benchmark method as a baseline method:

[Benchmark(Baseline = true)]
public void WithStringBuilderCache()
{
    for (int i = 0; i < CTR; i++)
    {
        var stringBuilder =
            StringBuilderCache.Acquire();
        stringBuilder.Append(message);
        _ = StringBuilderCache.
            GetStringAndRelease(stringBuilder);
    }
}

StringBuilderCache is an internal class that represents a per-thread cache with three static methods: Acquire, Release, and GetStringAndRelease. Here's the complete source code of this class: shorturl.at/dintW.

The Acquire method acquires a StringBuilder instance. The Release method stores the StringBuilder instance in the cache if the instance size is within the maximum allowed size. The GetStringAndRelease method is used to return a string instance and return the StringBuilder instance to the cache.

When you run the benchmarks this time, the output will be similar to Figure 3.

Benchmarking ASP.NET 6 Applications
In this section, you'll examine how to benchmark ASP.NET 6 applications to retrieve performance data.

Create a New ASP.NET 6 Project in Visual Studio 2022
You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose "Continue without code" to launch the main screen of the Visual Studio 2022 IDE.



Figure 3: The performance of benchmark methods with one of them set as a baseline method

To create a new ASP.NET 6 Project in Visual Studio 2022:

1. Start the Visual Studio 2022 IDE.
2. In the "Create a new project" window, select "ASP.NET Core Web API" and click Next to move on.
3. Specify the project name as BenchmarkingWebDemo and the path where it should be created in the "Configure your new project" window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the "Place solution and project in the same directory" checkbox. Click Next to move on.
5. In the next screen, specify the target framework and authentication type as well. Ensure that the "Configure for HTTPS," "Enable Docker Support," and the "Enable OpenAPI support" checkboxes are unchecked because you won't use any of these in this example.
6. Because you'll be using minimal APIs in this example, remember to uncheck the Use controllers (uncheck to use minimal APIs) checkbox, as shown in Figure 4.
7. Click Create to complete the process.

Minimal API is a new feature added in .NET 6 that enables you to create APIs with minimal dependencies. You'll use this application in this article. Let's now get

Figure 4: Enable minimal APIs for your Web API



started benchmarking ASP.NET applications with a simple method.

Get the Response Time in ASP.NET 6
You can easily get the response time of an endpoint using BenchmarkDotNet. To execute the ASP.NET 6 endpoints, you can use the HttpClient class. To create an instance of HttpClient, you can use the WebApplicationFactory, as shown in the code snippet given below:

var factory = new WebApplicationFactory
    <Startup>()
    .WithWebHostBuilder(configuration =>
    {
        configuration.ConfigureLogging
            (logging =>
        {
            logging.ClearProviders();
        });
    });

_httpClient = factory.CreateClient();

Listing 5: Benchmarking response time of an API

public class BenchmarkAPIPerformance
{
    private static HttpClient _httpClient;

    [GlobalSetup]
    public void GlobalSetup()
    {
        var factory = new WebApplicationFactory
            <Startup>()
            .WithWebHostBuilder(configuration =>
            {
                configuration.
                    ConfigureLogging(logging =>
                {
                    logging.ClearProviders();
                });
            });

        _httpClient = factory.CreateClient();
    }

    [Benchmark]
    public async Task GetResponseTime()
    {
        var response =
            await _httpClient.GetAsync("/");
    }
}

Figure 5: Benchmarking results of the response time of an API endpoint



To benchmark the response time of an endpoint, you can use the following code:

[Benchmark]
public async Task GetResponseTime()
{
    var response =
        await _httpClient.GetAsync("/");
}

The complete source code is given in Listing 5 for your reference. The benchmark results are shown in Figure 5.

Real-World Use Case of BenchmarkDotNet
In this section, you'll examine how to take advantage of BenchmarkDotNet to measure the performance of an application, determine the slow running paths, and take the necessary steps to improve the performance. You'll use an entity class named Product that contains a Guid field named Id. Note that a call to Guid.NewGuid consumes resources and is slow.

If you replace the Guid property with an int property, it consumes significantly fewer resources and improves performance. You'll create an optimized version of the Product class and then benchmark the performance of both these classes.

Create the Entity Classes
In the Solution Explorer window, right-click on the project and create a new file named Product with the following code in there:

public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Category { get; set; }
    public decimal Price { get; set; }
}

Let's create another entity class named ProductOptimized, which is a replica of the Product class but optimized for improving performance. The following code snippet illustrates the ProductOptimized class:

public struct ProductOptimized
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Category { get; set; }
    public decimal Price { get; set; }
}

In the ProductOptimized class, you've replaced the Guid data type of the Id property and the string data type of the Category property of the Product class with integers.

Create the Product Repository
Create a new class named ProductRepository in a file having the same name with a .cs extension. Now write the following code in there:

public class ProductRepository :
    IProductRepository
{
}

The ProductRepository class illustrated in the code snippet below implements the methods of the IProductRepository interface. Here is how this interface should look:

public interface IProductRepository
{
    public Task<List<Product>> GetAllProducts();
    public Task<List<ProductOptimized>>
        GetAllProductsOptimized();
}

The ProductRepository class implements the two methods of the IProductRepository interface:

public Task<List<Product>>
    GetAllProducts()

Listing 6: The GetProducts and GetProductsOptimized methods

private List<Product> GetProductsInternal()
{
    List<Product> products =
        new List<Product>();

    for(int i = 0; i < 1000; i++)
    {
        Product product = new Product
        {
            Id = Guid.NewGuid(),
            Name = "Lenovo Legion",
            Category = "Laptop",
            Price = 3500
        };
        products.Add(product);
    }
    return products;
}

private List<ProductOptimized>
    GetProductsOptimizedInternal()
{
    List<ProductOptimized> products = new
        List<ProductOptimized>(1000);

    for (int i = 0; i < 1000; i++)
    {
        ProductOptimized product =
            new ProductOptimized
        {
            Id = i,
            Name = "Lenovo Legion",
            Category = 1,
            Price = 3500
        };
        products.Add(product);
    }
    return products;
}



{
    return Task.FromResult
        (GetProductsInternal());
}

public Task<List<ProductOptimized>>
    GetAllProductsOptimized()
{
    return Task.FromResult
        (GetProductsOptimizedInternal());
}

Although the GetAllProducts method returns a list of the Product class, the GetAllProductsOptimized method returns a list of the ProductOptimized class you created earlier. These two methods call the private methods named GetProductsInternal and GetProductsOptimizedInternal respectively. These private methods return a List of the Product and ProductOptimized classes respectively.

The GetProductsInternal method creates a List of the Product class. It uses the Guid.NewGuid method to generate new Guids for the Id field. Hence, it creates 1000 new Guids, one for each instance of the Product class. Contrarily, the GetProductsOptimizedInternal method creates a List of the ProductOptimized class. In this class, the Id property is an integer type. So, in this method, 1000 new integer IDs are created. Creating new Guids is resource intensive and much slower than creating an integer.

Note that this implementation has been made as simple as possible because my focus is on how you can benchmark the performance of these methods.

The source code given in Listing 6 illustrates the GetProductsInternal and GetProductsOptimizedInternal methods. Note that in the GetProductsOptimizedInternal method, a list of the ProductOptimized entity class is created and the size of the list has been specified as well.
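If you want to isolate just the Guid-versus-integer cost outside of the repository code, a tiny micro-benchmark along these lines would do it. This class is my own illustrative addition, not part of the article's sample projects:

[MemoryDiagnoser]
public class IdGenerationBenchmarks
{
    // Baseline: producing an integer ID is essentially free
    [Benchmark(Baseline = true)]
    public int CreateIntId() => 42;

    // Guid.NewGuid has to gather entropy, so it costs more
    [Benchmark]
    public Guid CreateGuidId() => Guid.NewGuid();
}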

Figure 6: The benchmarking results of the GetProducts and GetProductsOptimized methods



Listing 7: Benchmarking performance of GetProducts and GetProductsOptimized API methods

[MemoryDiagnoser]
public class BenchmarkAPIPerformance
{
    private static HttpClient _httpClient;

    [Params(1, 25, 50)]
    public int N;

    [GlobalSetup]
    public void GlobalSetup()
    {
        var factory =
            new WebApplicationFactory<Startup>()
            .WithWebHostBuilder(configuration =>
            {
                configuration.
                    ConfigureLogging(logging =>
                {
                    logging.ClearProviders();
                });
            });
        _httpClient = factory.CreateClient();
    }

    [Benchmark]
    public async Task GetProducts()
    {
        for(int i = 0; i < N; i++)
        {
            var response =
                await _httpClient.
                    GetAsync("/GetProducts");
        }
    }

    [Benchmark]
    public async Task GetProductsOptimized()
    {
        for (int i = 0; i < N; i++)
        {
            var response =
                await _httpClient.
                    GetAsync("/GetProductsOptimized");
        }
    }
}
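A quick note on Listing 7: the [Params(1, 25, 50)] attribute tells BenchmarkDotNet to run every [Benchmark] method once per value of N, so two methods with three parameter values yield six result rows. A minimal standalone illustration of the same mechanism (my own example):

public class ParamsDemo
{
    // Each benchmark runs once per value: 1, 25, and 50
    [Params(1, 25, 50)]
    public int N;

    [Benchmark]
    public int Sum()
    {
        var total = 0;
        for (var i = 0; i < N; i++) total += i;
        return total;
    }
}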

Create the Endpoints
You'll create two endpoints, GetProducts and GetProductsOptimized, and then benchmark them. Because you're using minimal API in this example, write the following code snippet in the Program class of your ASP.NET 6 Web API project to create the two endpoints:

app.MapGet("/GetProducts", async
    (IProductRepository productRepository) =>
{
    return Results.Ok(await
        productRepository.GetAllProducts());
});

app.MapGet("/GetProductsOptimized", async
    (IProductRepository productRepository) =>
{
    return Results.Ok(await
        productRepository.GetAllProductsOptimized());
});
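These endpoints resolve IProductRepository from the dependency injection container. The article doesn't show that registration, but a typical Program.cs line for it would look like this (the scoped lifetime is my assumption):

// Register the repository so the endpoints can resolve it
builder.Services.AddScoped<IProductRepository,
    ProductRepository>();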

Create the Benchmarks


Let’s now create the benchmarking class that contains the
methods to be benchmarked using BenchmarkDotNet. To
do this, create a class named BenchmarkManager in a file
with the same name and a .cs extension and write the code
shown in Listing 7 in there.

The two methods that need to be benchmarked are the GetProducts and GetProductsOptimized methods. Note the Benchmark attribute on each of these methods. These two methods use the HttpClient instance to execute the two endpoints GetProducts and GetProductsOptimized respectively.

Figure 6 shows the output of the execution of the benchmarks. As you can see, GetProductsOptimized consumes less memory and is much faster than its counterpart, i.e., the GetProducts method.

Conclusion
BenchmarkDotNet is a compelling and easy-to-use framework to benchmark .NET code. You can execute a benchmark on a single method, module, or entire application to check the performance of the code without affecting its functionality. Remember that to improve the performance and scalability of your application, you must adhere to the best practices; if not, merely benchmarking your application's code won't help.

Joydip Kanjilal

SPONSORED SIDEBAR:
Get .NET 6 Help for Free

How does a FREE hour-long CODE Consulting virtual meeting with our expert .NET consultants sound? Yes, FREE. No strings. No commitment. No credit cards. Nothing to buy. For more information, visit www.codemag.com/consulting or email us at info@codemag.com.



ONLINE QUICK ID 2209071

Event Sourcing and CQRS with Marten


In this article, I'm going to examine the usage of Event Sourcing and the Command Query Responsibility Segregation (CQRS) architectural style through a sample telehealth medicine application. For tooling, I'm going to use the Marten library (https://martendb.io), which provides robust support for Event Sourcing on top of the Postgresql database engine. Before I move on to Event Sourcing though, let's think about the typical role of a database within your systems that don't use Event Sourcing. For most of my early career as a software professional, I assumed that system state would be persisted in a relational database that would act as the source of truth for the system. Very frequently, I've visualized systems with something like the simple layered view shown in Figure 1.

Jeremy D. Miller
jeremydmiller@yahoo.com
www.jeremydmiller.com
@jeremydmiller

Jeremy Miller is the Senior Director of Software Architecture at MedeAnalytics. Jeremy began his software career writing "Shadow IT" applications to automate his tedious engineering documentation, then wandered into software development because it looked like more fun. Jeremy is heavily involved in open source .NET development as the lead developer of Marten, Lamar, Alba, and other projects in the JasperFx family. Jeremy occasionally manages to write about various software topics at http://jeremydmiller.com.

With this typical architecture, almost all input and output of the system will involve reading or writing to this one database. Different use cases will have different needs, so at various times I might need to write explicit code to:

• Map the middle tier model to the database tables
• Map incoming input from outside the system to the database tables
• Translate the internal database to different structures for outgoing data in Web services

The point I'm trying to make here is that the single database model can easily bring with it a fair amount of mechanical



effort in translating the one database structure to the specific needs of various system inputs or outputs—and I'll need to weigh this effort when I compare the one database model to the Event Sourcing and CQRS approach I'll examine later.

Figure 1: Traditional layered architecture

I've also long known that in the industry, no one database structure can be optimized for both reading and writing, so I might very well try to support a separate, denormalized database specifically for reporting. That reporting database will need to be synchronized somehow from the main transactional database. This is just to say that the idea of having essentially the same information in multiple databases within a software architecture is not exactly new.

Alternative approaches using Event Sourcing or CQRS can look scary or intimidating upon your first exposure. Myself, I walked out after a very early software conference presentation in 2008 on what later became known as CQRS shaking my head and thinking that it was completely crazy and would never catch on, yet here I am, recommending this pattern for some systems.

Telehealth System Example
Before diving into the nomenclature or concepts around Event Sourcing or CQRS architectures, I want to consider a sample problem domain. Hastened by the COVID pandemic, "telehealth" approaches to health care, where you can speak to a health care provider online without having to make an in-office visit, rapidly became widespread. Imagine that I'm tasked with building a new website application that allows potential patients to request an appointment, connect with a provider (physician, nurse, nurse practitioner, etc.), and host the online appointments.

The system will need to provide functionality to coordinate the on-call providers. In this case, I'm going to attempt to schedule the appointments as soon as a suitable provider is available, so I'll need to be able to estimate expected wait times. I do care about patient happiness and want the providers working with the system to have a good experience with it, so it's going to be important to be able to collect a lot of metrics to help adjust staffing. Moreover, I need to plan for problems during a normal business day and give the administrative users the ability to understand what transpired during the day that might have caused patient wait times to escalate.

Furthermore, because this is related to health care, I should plan on having some pretty stringent requirements for auditing all activity within the system through a formal audit log.

This sample problem domain is based on a project I was a part of where I successfully used Event Sourcing, but on a very different technical stack.

Event Sourcing
Event Sourcing is an architectural approach to data persistence that captures all incoming data as an append-only event log. In effect, you're making the logical change log into the single source of truth in your system. Rather than modeling the current state of the system and trying to map every incoming input and outgoing query to this centralized state, Event Sourcing explicitly models the changes to the system.

So how does it work? First, let's do some modeling of the online telehealth system. Just considering events around the online appointments, I might model events for:

• Appointment Requested
• Appointment Scheduled



• Appointment Started
• Appointment Finished
• Appointment Cancelled

Figure 2: Scary, complicated CQRS architecture

These events are small types carrying data that models the change of state whenever they're recorded. Do note that the event type names are expressed in the past tense and are directly related to the business domain. The event name by itself can be important in understanding the system behavior. As an example, here's a C# version of AppointmentScheduled that models the appointment being assigned to a certain provider (medical professional):

public record AppointmentScheduled(
    Guid ProviderId,
    DateTimeOffset EstimatedTime
);

Taking Marten as a relatively typical approach, these event objects are persisted in the underlying database as serialized JSON in a single table that's ordered sequentially by the order in which the events are appended. Like other event stores, Marten also tracks metadata about the events, like the time the event was captured, and potentially more data related to distributed tracing, like correlation identifiers.

The events are organized into streams of related events that model a single workflow within the system. In the telehealth system, there are event streams for:

• Appointments
• Provider Shifts, to model the activity of a single provider during a single day

Although events in an Event Sourcing approach are the source of truth, you do still need to understand the current system state to support incoming system commands or supply clients with queries against the system state. This is where the concept of a projection comes into play. A projection is a view of the underlying events suitable for providing the write model to validate or apply incoming commands, or a read model that's suitable for usage by system queries. If you're familiar with the concept of materialized views in relational database engines, a projection in a system based on Event Sourcing plays a very similar role.

The advantages, or maybe just the applicability, of Event Sourcing are:

• It creates a rich audit log of business activity.
• It supports the concept of "Time Travel," or temporal querying, to be able to analyze the state of the system in the past by selectively replaying events.



• Using Event Sourcing makes it possible to retrofit potentially valuable metrics about the system after the fact by again replaying the events.
• Event Sourcing fits well with asynchronous programming models and event-driven architectures.
• Having the event log can often clarify system behavior.

Command Query Responsibility Segregation
CQRS is an architectural pattern that calls for the system state to be divided into separate models. The write model is optimized for transactions and is updated by incoming commands. The read model is optimized for queries to the system. As you'll rightly surmise, something has to synchronize and transform the incoming write model to the outgoing read model. That leads to the architectural diagram in Figure 2, which I think of as the "scary view of CQRS."

In this common usage of CQRS, there's some kind of background process that's asynchronously and continuously applying data updates from the write model to the read model. This can lead to extra complexity through more necessary infrastructure compared to the classic "one database" system model. I don't believe this is necessarily more code than using the traditional one database model. Rather, I would say that the hidden mapping and translation code in the one database model is much more apparent in the CQRS approach.

Event Sourcing and CQRS can be used independently of each other, but you'll very frequently see these two techniques used together. Fortunately, as I'll show in the remainder of this article, Marten can help you create a simpler architecture for CQRS with Event Sourcing than the diagram above.

Keep Your Streams Short
Although it's technically possible and maybe a little tempting to just throw all your events into one logical stream, you're more likely to be successful by dividing the event store into shorter streams.

Marten can be vulnerable to concurrent access problems if appending simultaneously to the same stream. Separating the event store into smaller streams avoids that issue.

Requirements through Event Storming
Event Storming (https://www.eventstorming.com/) is a very effective requirements workshop format to help a development team and their collaborating business partners understand the requirements for a software system. As the name suggests, Event Storming is a natural fit with Event Sourcing (and CQRS architectures).

Although there are software tools to do Event Storming sessions online, the easiest way to get started with Event Storming is to grab some colored sticky notes, a couple of markers, and convene a session with both the development team and the business domain experts near a big whiteboard or a blank wall.

The first step is to start brainstorming on the domain events within the business processes. As you discover these logical events, you'll write the event name down on an orange card and stick it on the board. As an example from the telehealth problem domain, some events might be "Appointment Requested" or "Appointment Scheduled" or "Appointment Cancelled." Note that these events are named tersely and are expressed in the past tense. As much as possible, you want to try to organize the events in the sequential order in which they occur within the system. If using a whiteboard, I also like to add some ad hoc arrows to delineate possible branching or relationships, but that's not part of the formal Event Storming approach.

The next step—but don't think for a minute that this must be a linear flow and that you shouldn't iterate between steps at any time—is to identify the commands or inputs to the system that will cause the previously identified events in the system. These commands are recorded as blue notes just to the left of the event or events that the command may cause in the system. The nomenclature, in this case, is in the present tense, like "Request Appointment."

In the third step, try to identify the business entities you'll need in order to process the incoming command inputs and decide which events should be raised. In Event Storming (and Event Sourcing) nomenclature, these are referred to as "Aggregates." In the case of the telehealth system, I've identified the need to have an Appointment aggregate that reflects the current state of an ongoing or requested patient appointment and a "Provider Shift" to track the time and activity of a provider during a particular day. These aggregates are captured on yellow cards and posted to the board.

Beyond that, you can optionally use:

• Green cards to denote informational views that users of the system need to access to carry out their work. In the case of the telehealth system, I'm calling out the need for a Board view that represents a related group of appointments and providers during a single workday. For example, pediatric appointments in the state of Texas on July 18, 2022 are a single Board.
• Significant business logic processes that potentially create one or more domain events are recorded on purple notes. In the telehealth example, there's going to be some kind of "matching logic" that tries to match appropriate providers with the incoming appointments based on a combination of availability, specialty, and the licensure of the provider.
• External system dependencies can be written down on pink cards to record their existence. In this case, I'll probably use Twilio, or something similar, to host any kind of embedded chat or teleconferencing, so I'm noting that in the Event Storming session.

Figure 3 shows a sample of what an Event Storming session on the telehealth system might look like.

I'm a big fan of Event Storming to discover requirements and to create a common understanding of the business domain. Event Storming stands apart from many traditional requirements elicitation techniques by directly pointing the way toward the artifacts in your code. Event Storming sessions are a great way to discover the ubiquitous language for the system, which is a necessary element of doing domain-driven design (DDD).

Getting Started with Marten
To get started with Marten as your event store, you'll first need a Postgresql database. My preference for local development is to use Docker to run the development database, and this is a copy of a docker-compose.yaml file that will get you started:

version: '3'
services:
  postgresql:
    image: "clkao/postgres-plv8:latest"
    ports:
      - "5433:5432"



Figure 3: Event Storming sample

Assuming that you have Docker Desktop installed on your local development computer, you just need to type this in your command line at the same location as the file above:

docker compose up -d

The command above starts the Docker container in the background. Next, let's start a brand new ASP.NET Core Web API project with this command:

dotnet new webapi

And let's add a reference to Marten with some extra command line utilities you'll use later with:

dotnet add package Marten.CommandLine

Switching to the application bootstrapping in the Program file created by the dotnet new template that I used, I'll add the following code:

builder.Services.AddMarten(opts =>
{
    var connString = builder
        .Configuration
        .GetConnectionString("marten");

    opts.Connection(connString);

    // There will be more here later...
});

Last, I'll add an entry to the appsettings.json file for the database connection string:

{
    "ConnectionStrings": {
        "marten": "connection string"
    }
}

To enable some administrative command line tooling that I'll use later, replace the last line of code in the generated Program file with this call:

// This is using the Oakton library
await app.RunOaktonCommands(args);

Appending Events with Marten
Marten (https://martendb.io) started its life as a library to allow .NET developers to exploit the robust JSON support in the Postgresql database engine as a full-fledged document database, with a small event sourcing capability bolted onto the side. As Marten and Marten's community have grown, the event sourcing functionality has matured and probably drives most of the growth of Marten at this point.

In the telehealth system, I'll write the very simplest possible code to append events for the start of a new ProviderShift stream. First though, let's add some event types for the ProviderShift workflow:

public record ProviderAssigned(
    Guid AppointmentId);
public record ProviderJoined(Guid BoardId,
    Guid ProviderId);
public record ProviderReady();
public record ProviderPaused();
public record ProviderSignedOff();
public record ChartingFinished();
public record ChartingStarted();

I'm assuming the usage of .NET 6 or above here, so it's legal to use C# record types. That isn't mandatory for Marten usage, but it's convenient because events should never change during the lifetime of the system. Mostly for me

54 Event Sourcing and CQRS with Marten codemag.com


Mostly for me though, using C# records just makes the code very terse and easily readable.

If you're interested, the underlying table structure for streams and events that Marten generates is shown in Listing 1 and Listing 2.

Listing 1: Stream Table

CREATE TABLE mt_streams (
    id uuid NOT NULL,
    type varchar NULL,
    version bigint NULL,
    timestamp timestamptz NOT NULL DEFAULT (now()),
    snapshot jsonb NULL,
    snapshot_version integer NULL,
    created timestamptz NOT NULL DEFAULT (now()),
    tenant_id varchar NULL DEFAULT '*DEFAULT*',
    is_archived bool NULL DEFAULT FALSE,
    CONSTRAINT pkey_mt_streams_id PRIMARY KEY (id)
);

Listing 2: Events Table

CREATE TABLE mt_events (
    seq_id bigint NOT NULL,
    id uuid NOT NULL,
    stream_id uuid NULL,
    version bigint NOT NULL,
    data jsonb NOT NULL,
    type varchar(500) NOT NULL,
    timestamp timestamp with time zone NOT NULL DEFAULT '(now())',
    tenant_id varchar NULL DEFAULT '*DEFAULT*',
    mt_dotnet_type varchar NULL,
    is_archived bool NULL DEFAULT FALSE,
    CONSTRAINT pkey_mt_events_seq_id PRIMARY KEY (seq_id)
);

ALTER TABLE mt_events
    ADD CONSTRAINT fkey_mt_events_stream_id FOREIGN KEY (stream_id)
    REFERENCES cli.mt_streams(id) ON DELETE CASCADE;

You'll also notice that I'm not adding a lot of members to most of the events. As you'll see in the next code sample, Marten tags all these captured events to the provider shift ID anyway. Just the name of the event type by itself denotes a domain event, so that's informative. In addition, Marten tags each event captured with metadata like the event type, the version within the stream, and, potentially, correlation and causation identifiers.

Now, on to appending events with Marten. In the following code sample, I spin up a new Marten DocumentStore that's the root of any Marten usage, then start a new ProviderShift stream with a couple initial events:

// This would be an input
var boardId = Guid.NewGuid();

var store = DocumentStore
    .For("connection string");

using var session = store.LightweightSession();

session.Events.StartStream<ProviderShift>(
    new ProviderJoined(boardId),
    new ProviderReady()
);

await session.SaveChangesAsync();

Similar to Entity Framework Core's DbContext type, the Marten IDocumentSession represents a unit of work that I can use to organize transactional boundaries by gathering up work that should be done inside of a single transaction, then helping to commit that work in one native Postgresql transaction. From the Marten side of things, it's perfectly possible to capture events for multiple event streams and even a mix of document updates within one IDocumentSession.
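If you'd like to see the per-event metadata I mentioned earlier for yourself, a small sketch like the following will print it. Treat this as an illustration under assumptions: it presumes Marten's FetchStreamAsync() API, which returns the raw IEvent wrappers around each stored event:

public async Task show_event_metadata(
    IQuerySession session,
    Guid shiftId)
{
    // Fetch the raw event wrappers for one stream
    var events = await session
        .Events
        .FetchStreamAsync(shiftId);

    foreach (var e in events)
    {
        // Each IEvent carries metadata alongside the
        // deserialized event body in e.Data
        Console.WriteLine(
            $"v{e.Version}: {e.EventTypeName} at {e.Timestamp}");
    }
}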
Projections with Marten
Now that you know how to append events, the next step is to have the provider events projected into a write model representing the state of the ProviderShift that you'll need later. That's where Marten's projection model comes into play.

As a simple example, let's say that you want all of the provider events for a single ProviderShift rolled up into this data structure:

public class ProviderShift
{
    public Guid Id { get; set; }
    public int Version { get; set; }
    public Guid BoardId { get; private set; }
    public Guid ProviderId { get; init; }
    public ProviderStatus Status {
        get; private set; }
    public string Name { get; init; }

    public Guid? AppointmentId { get; set; }

    // More here in just a minute...
}
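One detail the printed snippets never show is the ProviderStatus type itself. Any simple enum will do; this sketch is my assumption, with only Ready, Assigned, and Charting actually appearing in the code in this article:

public enum ProviderStatus
{
    Ready,
    Assigned,
    Charting,
    // Hypothetical additional members, mirroring the
    // ProviderPaused and ProviderSignedOff events
    Paused,
    SignedOff
}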

Hopefully you'll be able to trace how all of this information could be gleaned from the event records like ProviderReady that I defined earlier. In essence, what you need to do is to apply the "left fold" concept from functional programming to combine all the events for a single ProviderShift event stream into that structure above.

The one exception is the ProviderShift.Version property. One of Marten's built-in naming conventions (which can be overridden) is to treat any public member of an aggregated type with the name "Version" as the stream version, such that when Marten applies the events to update the projected document, this member is set by Marten to be the most recent version number of the stream itself. To make that concrete, if a ProviderShift stream contains four events, then the version of the stream itself is 4.

As the simplest possible example, I'm going to use Marten's self-aggregate feature to add the updates by event directly to the ProviderShift type up above. Do note that it's possible to use an immutable aggregate type for this inside of Marten, but I'm choosing to use a mutable object type just because that leads to simpler code. In real usage, be aware that opting for immutable aggregate types works the garbage collection in your system harder by spawning more object allocations. Also, be careful with immutable aggregates because that can occasionally bump into JSON serialization issues that are easily avoidable with mutable aggregate types.


In this case, the event stream within the application should be started with the ProviderJoined event, so I'll add a method to the ProviderShift type up above that creates a new ProviderShift object to match that initial ProviderJoined event, like so:

public static async Task<ProviderShift> Create(
    ProviderJoined joined,
    IQuerySession session)
{
    var p = await session
        .LoadAsync<Provider>(joined.ProviderId);

    return new ProviderShift
    {
        Name = $"{p.FirstName} {p.LastName}",
        Status = ProviderStatus.Ready,
        ProviderId = joined.ProviderId,
        BoardId = joined.BoardId
    };
}

A couple notes about the code above:

• There's no interface or mandatory base class of any kind from Marten in this usage, just naming conventions.
• The method name Create() with the first argument type being ProviderJoined exercises a naming convention in Marten to identify this method as taking part in the projection.
• The Marten team urges some caution with this, but it's possible to query Marten for additional information inside the Create() method by passing in the Marten IQuerySession object.
• As implied by this code, it's quite possible with Marten to store reference or relatively static data like basic information about a provider (name, phone number, qualifications) in a persisted document type while also using the Marten event store capabilities.

Events are Immutable
In most of the literature you'll see about Event Sourcing, the strong recommendation is to assume that event data is immutable. That's not to say that you should plan on event data being infallible. Rather than reaching into the database to correct erroneous event data, you can use additional, corrective events to "fix" any errors.

Now let's add some additional methods to handle other event types. The easiest thing to do is to add more methods named Apply(event type) like this one:

public void Apply(ProviderReady ready)
{
    AppointmentId = null;
    Status = ProviderStatus.Ready;
}

public void Apply(ProviderAssigned assigned)
{
    Status = ProviderStatus.Assigned;
    AppointmentId = assigned.AppointmentId;
}

Or even better, if the resulting method can be a one-liner, use the newer C# expression-bodied method option:

// This is kind of a catch all for any paperwork
// the provider has to do after an appointment
// for the just concluded appointment
public void Apply(ChartingStarted charting) =>
    Status = ProviderStatus.Charting;

Again, to be clear, these methods are added directly to the ProviderShift class to teach Marten how to apply events to the ProviderShift aggregate.

Let's move on to applying the aggregate with Marten's "live aggregation" mode:

public async Task access_live_aggregation(
    IQuerySession session,
    Guid shiftId)
{
    // Fetch all the events for the stream, and
    // apply them to a ProviderShift aggregate
    var shift = await session
        .Events
        .AggregateStreamAsync<ProviderShift>(
            shiftId);
}

In the code above, IQuerySession is a read-only version of Marten's IDocumentSession that's available in your application's Dependency Injection container in a typical .NET Core application. The code above fetches all the captured events for the stream identified by shiftId, which are then passed one at a time, in order, to the ProviderShift aggregate to create the current state from the events.

This usage queries for every single event for the stream, and deserializes each event object from persisted JSON in the database, so it could conceivably get slow as the event stream grows. Offhand, I'm guessing that I'm probably okay with the ProviderShift aggregation only happening "live," but I do have other options.

The second option is to use Marten's "inline" lifecycle to apply changes to the projection at the time that events are captured. To use this, I'm going to need to do just a little bit of configuration in the Marten setup:

var store = DocumentStore.For(opts =>
{
    opts.Connection("connection string");
    opts.Projections
        .SelfAggregate<ProviderShift>(
            ProjectionLifecycle.Inline);
});

Now, when I capture events against a ProviderShift event stream, Marten applies the new events to the persisted ProviderShift aggregate for that stream, and updates the aggregated document and appends the events in the same transaction for strong consistency:

var shiftId = session.Events.StartStream<ProviderShift>(
    new ProviderJoined(boardId),
    new ProviderReady()
).Id;

// The ProviderShift aggregate will be
// updated at this time
await session.SaveChangesAsync();


// Load the persisted ProviderShift right out
// of the database
var shift = await session
    .LoadAsync<ProviderShift>(shiftId);

Right here, you can hopefully see the benefit of Marten coming with both a document database feature set and the event store functionality. Without any additional configuration, Marten can store the projected ProviderShift documents directly to the underlying Postgresql database.

Lastly, there's one more choice. I can use eventual consistency and allow the ProviderShift aggregate to be built in an asynchronous manner in background threads. This is going to require a little more configuration, though, as I need to be using the full application bootstrapping, as shown below:

builder.Services.AddMarten(opts =>
{
    // This would typically come from config
    opts.Connection("connection string");

    opts.Projections
        .SelfAggregate<ProviderShift>(
            ProjectionLifecycle.Async);
})

// This adds a hosted service to run
// asynchronous projections in the background
.AddAsyncDaemon(DaemonMode.HotCold);

As shown in Figure 4, Marten has an optional subsystem called the "async daemon" that's used to process asynchronous projections with an eventual consistency model in a background process.

Figure 4: CQRS with Marten

The async daemon runs as a .NET IHostedService in a background thread. The daemon constantly scans the underlying event store tables and applies new events to the registered projections. In the case of the ProviderShift aggregation, the async daemon applies new incoming events like the ProviderReady or ProviderAssigned events that are handled by the ProviderShift aggregate to update the ProviderShift aggregate documents and persists them using Marten's document database functionality. The async daemon comes with guarantees to:

• Apply events in sequential order
• Apply all events at least once

The async daemon is an example of eventual consistency where the query model (the ProviderShift aggregate in this case) is updated to match the incoming events rather than the strong consistency model allowed by Marten's inline projection lifecycle.

Strong vs. Eventual Consistency
Marten is unusual for an event store tool because it offers the strongly consistent "inline" mode. You need to be cognizant of the differences and potential problems with using the strong versus eventual consistency models. Eventual consistency may help your system scale by removing work to background processes, but can lead to subtle bugs if your developers aren't careful.

To summarize the projection lifecycles in Marten and their applicability, refer to Table 1.

Table 1: Marten Projection Lifecycles

Live: The projected documents are evaluated from the raw events on demand. This lifecycle is recommended for short event streams or in cases where you want to optimize much more for fast writes with few reads.

Inline: The projected documents are updated and persisted at the time of event capture, and in the same database transaction for a strong consistency model.

Async: Projections are updated from new events in a background process. This lifecycle should be used any time there's a concern about concurrent updates to a projected document and should almost always be used for projections that span multiple event streams.

Time Travel
One of the advantages of using Event Sourcing is the ability to use "time travel" to replay events up to a certain time to recreate the state of the system at a certain time or at a certain revision. In the sample below, I'm going to recreate the state of a given ProviderShift at a time in the past:

public async Task time_travel(
    IQuerySession session,
    Guid shiftId,
    DateTimeOffset atTime)
{
    // Fetch all the events for the stream, and
    // apply them to a ProviderShift aggregate
    var shift = await session
        .Events
        .AggregateStreamAsync<ProviderShift>(
            shiftId,
            timestamp: atTime);
}

In this usage, Marten queries for all the events for the given ProviderShift stream up to the point in time expressed by the atTime argument and calculates the projected state at that time. Inside of this fictional telehealth system, it might very well be valuable for the business to replay events throughout the day to understand how the appointments and provider interaction played out and diagnose scheduling delays.
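Replaying to a certain revision rather than a certain time works with the same mechanism. Here's a small sketch of that usage; it assumes the AggregateStreamAsync() overload that accepts a version argument, which applies only the first few events of the stream:

public async Task time_travel_by_version(
    IQuerySession session,
    Guid shiftId)
{
    // Recreate the ProviderShift as it stood after
    // the first three events in its stream
    var shift = await session
        .Events
        .AggregateStreamAsync<ProviderShift>(
            shiftId,
            version: 3);
}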

Projecting Events to a Flat Table
One of the advantages of Marten is that it allows you to be flexible in your persistence approach within a single database engine without having to introduce yet more infrastructure. Marten was originally built to be a document database with a nascent event store capability over the top of the existing Postgresql database engine, but the event store functionality has matured greatly since then. In addition, Postgresql is a great relational database engine, so I can even take advantage of that and write projections that write some of the events to a plain old SQL table.

Back to the fictional telehealth system, one of the features I'll absolutely need is the ability to predict the wait times that patients should expect when they request an appointment.


To support that calculation, the system needs to track statistics about how long appointments last during different times of the day. To that end, I'm going to add another projection against the same events I'm already capturing, but this time, I'm going to use Marten's EventProjection recipe that allows me to be more explicit about how the projection handles events.

First, I'm going to start a new class for this projection and define through Marten itself what the table structure is:

public class AppointmentDurationProjection : EventProjection
{
    public AppointmentDurationProjection()
    {
        // Defining an extra table so Marten
        // can manage it for us
        var table
            = new Table("appointment_duration");
        table.AddColumn<Guid>("id")
            .AsPrimaryKey();
        table.AddColumn<DateTimeOffset>("start");
        table.AddColumn<DateTimeOffset>("end");

        SchemaObjects.Add(table);
    }

    // more later...
}

Next, using Marten's naming conventions, I'm going to add a method that handles the AppointmentStarted event in this projection:

public void Apply(
    IEvent<AppointmentStarted> @event,
    IDocumentOperations ops)
{
    var sql = "insert into appointment_duration"
        + " (id, start) values (?, ?)";
    ops.QueueSqlCommand(sql,
        @event.Id,
        @event.Timestamp);
}

And an additional method for the AppointmentFinished event:

public void Apply(
    IEvent<AppointmentFinished> @event,
    IDocumentOperations ops)
{
    var sql = "update appointment_duration "
        + "set end = ? where id = ?";
    ops.QueueSqlCommand(sql,
        @event.Timestamp,
        @event.Id);
}
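Once events flow through this projection, the wait-time statistics can come from ordinary SQL against the new table. Something along these lines is what I have in mind; this is purely an illustrative sketch, and the exact bucketing you need may differ:

-- Average appointment length by hour of day;
-- "end" is quoted because END is a reserved word
select date_part('hour', start) as hour_of_day,
       avg("end" - start) as average_duration
  from appointment_duration
 where "end" is not null
 group by 1
 order by 1;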

The next step is to add this new projection to the system by revisiting the AddMarten() section of the Program file and adding that projection like so:

builder.Services.AddMarten(opts =>
{
    // other configuration...

    opts.Projections
        .Add<AppointmentDurationProjection>(
            ProjectionLifecycle.Async);

    // OR ???

    opts.Projections
        .Add<AppointmentDurationProjection>(
            ProjectionLifecycle.Inline);
});

There's a decision to be made about the new AppointmentDurationProjection that I'm adding to a system that's already in production. If I make the AppointmentDurationProjection asynchronous and deploy that change to production, the Marten async daemon attempts to run every historical event from the beginning of the system through this new projection until it has eventually reached what Marten calls the "high water mark" of the event store, and then continues to process new incoming events at a normal pace.

There's the concept of stream archival in Marten that you can use to avoid the potential performance problem of having to replay every event from the beginning of the system.

If, instead, I decide to make the new AppointmentDurationProjection run inline with event capture transactions, that new table only reflects events that are captured from that point on. And maybe that's perfectly okay for the purposes here.

But what if, instead, I want that new projection to run inline and also want it applied to every historical event? That's the topic of the next section.

Replaying Events or Rebuilding Projections
It's an imperfect world, and there will occasionally be reasons to rebuild the stored document or data from a projection against the persisted events. Maybe I had a reason to change how the projection was created or structured? Maybe I've added a new projection? Maybe, due to intermittent errors of some sort, the async daemon had to skip over some missing events or there was some sort of "poison pill" event that Marten had to skip over due to errors in the projection code?

The point is that the events are the single source of truth, the stored projection data is a read-only view of that raw data, and I can rebuild the projections from the raw events later.

Here's an example of doing this rebuild programmatically:


public async Task rebuild_projection(
    IDocumentStore store,
    CancellationToken cancellation)
{
    // create a new instance of the async daemon
    // as configured in the document store
    using var daemon = await store
        .BuildProjectionDaemonAsync();

    await daemon
        .RebuildProjection<AppointmentDurationProjection>(
            cancellation);
}

That code deletes any existing data in the appointment_duration table, resets Marten's record of the progress of the existing projection, and starts to replay all non-archived events in the system from event #1 all the way to the known "high water mark" of the event store at the beginning of this operation.

This can function simultaneously with the running application, as long as the projection being rebuilt isn't also running in the application.

To make this functionality easier to access and apply at deployment time, Marten comes with some command line extensions to your .NET application with the Marten.CommandLine library. Marten.CommandLine works with the related Oakton (https://jasperfx.github.io/oakton) library that allows .NET developers to expose additional command line tools directly to their .NET applications.

Assuming that your application has a reference to Marten.CommandLine, you can opt into the extended command line options with this line of code in your Program file:

// This is using the Oakton library
await app.RunOaktonCommands(args);

From the command line at the root of your project using the Marten.CommandLine library, type:

dotnet run -- help projections

to access the built-in usage help for the Oakton commands active in your system. With Marten.CommandLine, you should see some text output like this:

projections - Marten's asynchronous projection...
  └── Marten's asynchronous projection...
      └── dotnet run -- projections
          ├── [-i, --interactive]
          ├── [-r, --rebuild]
          ├── [-p, --projection <projection>]
          ├── [-s, --store <store>]
          ├── [-l, --list]
          ├── [-d, --database <database>]
          ├── [-l, --log <log>]
          ├── [-e, --environment <environment>]
          ├── [-v, --verbose]
          ├── [-l, --log-level <loglevel>]
          └── [----config:<prop> <value>]

To rebuild only the new AppointmentDurationProjection from the command line, type this at the command line at the root of the telehealth system:

dotnet run -- projections --rebuild
    -p AppointmentDurationProjection

This command line usage was intended for development and testing time, but also for scripting production deployments.

The Marten team and community, of course, look forward to the day when Marten is able to support a "zero downtime" projection rebuild model.

Command Handlers with Marten
I've spent a lot of time talking about Event Sourcing so far, but little about CQRS, so let's amend that by considering the code that you'd need to write as a command handler. As part of the telehealth system, the providers need to perform a business activity called "charting" at the end of each patient appointment where they record whatever notes or documentation is required to close out the appointment. The telehealth system absolutely needs to track the time that providers spend charting.

To mark the end of the charting activity, the system needs to accept a command message from the provider's user interface client that might look something like this:

public record CompleteCharting(
    Guid ShiftId,
    int Version
);

To write the simplest possible ASP.NET Core controller endpoint method that handles this incoming command, verifies the request against the current state of the ProviderShift, and raises a new ChartingFinished event, I'll write this code:

public async Task CompleteCharting(
    [FromBody] CompleteCharting charting,
    [FromServices] IDocumentSession session)
{
    var shift = await session
        .LoadAsync<ProviderShift>(
            charting.ShiftId);

    // Validate the incoming data before making
    // the status transition
    if (shift.Status != ProviderStatus.Charting)
    {
        throw new Exception("invalid request");
    }

    var finished = new ChartingFinished();
    session.Events.Append(
        charting.ShiftId,
        finished);

    await session.SaveChangesAsync();
}

The big thing I missed up there is any kind of concurrency protection to verify that either I'm not erroneously receiving duplicate commands for the same ProviderShift or that I want to force the commands against a single ProviderShift to be processed sequentially.


First, let’s try to solve the potential concurrency issues with <ProviderShift>(
optimistic concurrency, meaning that I’m going to start charting.ShiftId,
by telling Marten what initial version of the ProviderShift stream =>
stream the command thinks the stream should be at. If, at {
the time of saving the changes on the IDocumentSession, // validation code...
Marten determines that the event stream in the database
has moved on from that version, Marten throws a concur- var finished = new ChartingFinished();
rency exception and rollback the transaction. stream.AppendOne(finished);
});
Recent enhancements to Marten make this workflow much
simpler. The following code rewrites the Web service method This usage uses the database itself to order concurrent op-
above to incorporate optimistic concurrency control based erations against a single event stream, but be aware that
on the CompleteCharting.Version value that’s assumed to be this usage can also throw exceptions if Marten is unable to
the initial stream version: attain a write lock on the event stream before timing out.

public async Task CompleteCharting(


[FromBody] CompleteCharting charting, Summary
[FromServices] IDocumentSession session) Marten is one of the most robust and feature-complete tools
{ for Event Sourcing on the .NET stack. Arguably, Marten is
var stream = await session an easy solution for Event Sourcing within CQRS solutions
Strong vs. Eventual .Events because of its “event store in a box” inclusion of both the
Consistency .FetchForWriting<ProviderShift>( event store and asynchronous projection model within one
charting.ShiftId, single library and database engine.
Marten is unusual for an event
charting.Version);
store tool because it offers
the strongly consistent “inline” Event Sourcing is quite different from the traditional ap-
mode. // Validation code... proach of persisting system state in a single database struc-
ture, but has strengths that may well fit business domains
You need to be cognizant var finished = new ChartingFinished(); better than the traditional approach. CQRS can be done
of the differences and stream.AppendOne(finished); without necessarily having a complicated infrastructure.
potential problems with
using the strong versus await session.SaveChangesAsync();  Jeremy D. Miller
eventual consistency models. } 
Eventual consistency may
help your system scale by And, for another alternative, if you’re comfortable with a
removing work to background functional programming inspired “continuation passing
processes, but can lead to style” usage of Lambdas:
subtle bugs if your developers
aren’t careful.
return session
.Events
.WriteToAggregate<ProviderShift>(
charting.ShiftId,
charting.Version,
stream =>
{
// validation code...

var finished = new ChartingFinished();


stream.AppendOne(finished);
});

Optimistic concurrency checks are very efficient, assuming


that actual concurrent access is rare, because it avoids any
kind of potential expensive database locking. However, this
requires some kind of exception-handling process that may
include selective retries. That’s outside the scope of this ar-
ticle.
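That said, here's a rough sketch of what a selective retry around the optimistic version could look like. Treat it as an assumption-laden illustration: it presumes Marten surfaces the version conflict as a ConcurrencyException, and HandleCompleteCharting() and ReadCurrentVersion() are hypothetical helpers standing in for the endpoint logic shown above:

// Sketch only: retry the command a few times when the
// stream version has moved on underneath this handler
var attempts = 0;
while (true)
{
    try
    {
        // Hypothetical helper wrapping the
        // FetchForWriting() endpoint logic above
        await HandleCompleteCharting(charting);
        break; // success
    }
    catch (ConcurrencyException) when (++attempts < 3)
    {
        // Another command won the race. Refresh the
        // expected version (hypothetical helper) and retry.
        var current = await ReadCurrentVersion(charting.ShiftId);
        charting = charting with { Version = current };
    }
}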

Because Marten is built on top of the full-fledged Postgresql database, Marten can take advantage of Postgresql row locking to wait for exclusive access to write to a specific event stream. I'll rewrite the code in the previous sample to instead use exclusive locking:

return session
    .Events
    .WriteExclusivelyToAggregate<ProviderShift>(
        charting.ShiftId,
        stream =>
        {
            // validation code...

            var finished = new ChartingFinished();
            stream.AppendOne(finished);
        });

This usage uses the database itself to order concurrent operations against a single event stream, but be aware that it can also throw exceptions if Marten is unable to attain a write lock on the event stream before timing out.

Summary
Marten is one of the most robust and feature-complete tools for Event Sourcing on the .NET stack. Arguably, Marten is an easy solution for Event Sourcing within CQRS solutions because of its "event store in a box" inclusion of both the event store and asynchronous projection model within one single library and database engine.

Event Sourcing is quite different from the traditional approach of persisting system state in a single database structure, but has strengths that may well fit business domains better than the traditional approach. CQRS can be done without necessarily having a complicated infrastructure.

Jeremy D. Miller




ONLINE QUICK ID 2209081

Putting Data Science into Power BI
Microsoft's Power BI works as the ultimate power tool for data analytics. It lets you connect to many different data source types (even within the same model) and then transform the connections into useful data tables. You can then use this data to create DAX calculations, and build visuals to communicate model trends, outcomes, and key numbers. The main languages of Power BI are M (in Power Query) and DAX. Data science is an area in the data analytics space focusing on models like those that make predictions. Artificial intelligence is an area of data science that lets you use cognitive science to recognize and act on patterns within the data points that you have. Machine learning models are a subgroup of AI that involve using feedback loops to further improve the model. You can combine and use these data science models to create visuals and make forecasts and better decisions in the future. The three main languages of data science are SQL, R, and Python.

Helen Wall
http://www.linkedin.com/in/helenrmwall/
www.helendatadesign.com

Helen Wall is a data science consultant who founded Helen Data Design. She is a power user of Microsoft Power BI, Excel, Tableau, and AWS QuickSight. Her primary driver behind working in these tools is finding the point where data science and design intersect. She is a LinkedIn Learning instructor for data science courses focusing on Power BI, AWS QuickSight, Excel, R, and Python. She is also a lecturer at the Rice University business school focusing on Python, as well as an instructor at Cornell University's online certificate programs for data science and analytics using R and Excel. She has a double bachelor's degree from the University of Washington where she studied math and economics and was a Division I varsity rower. (The real-life characters from the book The Boys in the Boat were Husky rowers that came before her.) She has a master's degree in financial management from Durham University in England.

Given the power of Power BI and data science, how can you combine these two facets of data modeling together? The data for this article focuses on economic and weather trends in the greater Houston, Texas area. One data table contains employment numbers from the U.S. Bureau of Labor Statistics BLS Data Viewer (https://beta.bls.gov/dataViewer/view/timeseries/LAUCN482010000000004 and https://beta.bls.gov/dataViewer/view/timeseries/LAUCN482010000000005). The other data table contains the daily high temperatures at the city's Hobby Airport over the last two years from the NOAA Climate Data Online (CDO) data portal (https://www.ncdc.noaa.gov/cdo-web/).

Ways to Leverage Algorithms in Power BI
One way you can explore a combined framework with BI and data science is through the capabilities of Power BI. To see the opportunities for these algorithms, let's divide the AI and machine learning functionalities within Power BI into three categories:

• Those that Power BI automatically runs
• Pre-built models you can connect to within Power BI
• Models you can build yourself using R, Python, or even DAX

Power BI makes these algorithms available to you in the Power Query Editor once you load the data into Power BI Desktop through the modeling and visualization options. Figure 1 shows the combined capabilities available within a single query for all three categories listed above.

Figure 1: Levels of coding in Power Query

Power BI Guesses
In Figure 1, you can see that Power Query automatically chooses the data type for each column in the existing query so far. You can change the data types yourself if it doesn't already do this, or if the automatically selected data types don't match with the data types you want to use. Power Query uses the first 1000 rows of data that appear in the table preview to make an educated guess for the actual data types. Similarly, it can also guess whether you want to promote the first row of the data table into the header position or, given enough information, it can also guess an entire series of query steps that you can see in the Applied Steps on the right.

Once you load the data into Power BI, you can explore several visuals where you pick the visual, but it nudges you toward the next step. For example, in the Model view, you might see the tables automatically joined together based on how Power BI thinks the dimension and fact tables connect. You should also check to make sure that it joins the tables on the fields you want them to join on.

When you configure visuals, the decomposition tree and key influencers visuals use AI to predict the next step that you or the end user should take in analyzing the data in the visual. You can also leverage the smart narrative visual for the insights that Power BI automatically provides as to why trends or metrics might occur.

Connect to a Model
Power BI also lets you connect to built-in algorithms directly in the Power Query Editor as part of the transformation process with the AI Insights options like Text Analytics, Vision, and Azure Machine Learning, like the options you see for the Add Column ribbon in Figure 1. In the Power Query Editor, for example, you can connect to models from Azure Cognitive Services and Azure Machine Learning models built outside Power BI and Power Query. Examples of available Azure Cognitive Services models include Image Recognition and Text Analytics. Within Text Analytics, you can choose from algorithms like language detection, key phrase extraction, and score sentiment (which tells you the positive or negative tone of a text input).

The fuzzy matching algorithm uses natural language processing (NLP) to match together similar strings of text. If you're using Power BI dataflows in the Power BI service (either Pro or Premium accounts), you can connect to the cluster values algorithm to transform the existing data table by either adding a column or grouping the existing values in the grouped column together. Both functionalities use a very similar fuzzy matching algorithm to what you see in the merging functionality, except it only returns results on a single data table instead of combining two tables together. You can configure the parameters for the matching within the fuzzy matching options for all three iterations of this algorithm.

Another example of an NLP model within a Power BI visual is the Q&A visual, which lets you ask questions and get responses about the data.


With the visualization options, you can also connect to the built-in machine learning algorithms directly to find clusters or anomalies in existing data points. You can also use linear regression to find trend lines and forecasting to project the data trends into the immediate future. With any of these pre-built models, you'll want to either know or have an idea of the fields you want to use in the model. Even though you don't have to build them yourself, it's still important to know what fields you can pass into the model as parameters to get the outcome you're looking for (and the requirements to make the models work properly).

Constructing Your Own Visuals
Finally, you can build your own visuals to represent the outcomes of these algorithms. One way you can do this is using custom visuals from the Power BI AppSource store. Many of these visuals use R behind the scenes to construct the visual, but you don't need to write any R code yourself. Examples of models supported by custom visuals using R include ARIMA, TBATS, clustering, and outliers. Power BI installs the packages for these visuals; make sure you're using the right versions of the library packages.

In addition to importing custom visuals that run R behind the scenes in Power BI, you can also write R scripts directly in the R and the Python visuals, as well as running scripts for both languages in the Power Query Editor. For the sake of simplicity in this article, I'm going to use R as the sample code, but you can absolutely use Python too. One of the challenges I encounter is that although the Power BI service supports almost one thousand R packages, it only supports a few Python libraries. To use these languages directly in Power BI, you need to do the following:

• Install R or Python on your own computer.
• Enable R and/or Python scripts to run in Power BI (and while you're at it, enable Python scripts because I'll discuss this later).

Once you upload reports as analyses to the Power BI service, they run through the cloud instead of your computer.

What Are You Looking For?
Build machine learning models through building scatter plot and line chart visuals as starting points for understanding the high-level behavior of the data. Understanding algorithms can seem intimidating, but you can divide what you're looking for into three different goal categories:

• Trends
• Groups
• Outliers or anomalies

Trends
When you're looking at data points, you want to see if there's a direction that they orient in. For example, if you look at the time series trends for employment and unemployment data by month on the left side of Figure 2, you can see that since the beginning of 2020, the overall employment numbers increased in the Houston area, while the overall unemployment numbers decreased over the same period. You can add these dashed lines directly to several types of visuals, including the line charts you see representing the time-series trends for both these metrics. To add these lines, turn them on directly through the analytics options in the Visualizations pane. The lines you see represent the outcomes of linear regression modeling using ordinary least squares (OLS). If you calculated this yourself, whether that's through downloading the data to Excel, running an R or Python script on it, or even calculating it directly using DAX, you'll get the same slope and intercept that you see on these charts (see Figure 2).

Figure 2: Trend lines
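For instance, if you bring the two measures into a data frame, a one-line R model reproduces what the analytics pane draws. This is an illustrative sketch; dataset, Employment, and Unemployment stand in for whatever fields your model actually exposes:

# Ordinary least squares fit of unemployment against
# employment; coef() returns the intercept and slope
fit <- lm(Unemployment ~ Employment, data = dataset)
coef(fit)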


You can also see how linear regression looks on a scatter plot instead of a time series chart in the visual on the right of Figure 2. This models two variables against each other. You can see that as employment increases in Harris County, Texas, the numbers for unemployment also go down. Power BI lets you add the trend line on the scatter plot in the same way you could for the line chart visual. Again, it uses OLS for linear regression to calculate the intercept and slope of this trend line.

Let's say you want to forecast the outcomes of time-series data into the future. You can do this through the forecasting option available at the bottom of the same analytics pane as the trend line options that you see in Figure 3. Note, though, that the forecasting option only works on visuals like line charts with a time-series field on the x-axis, like dates. Within the formatting options, you can also change the length of the forecast, and whether to ignore some historical data points, and, most importantly, you can also choose to include seasonality. Notice that the forecast option adds both a line and a gray shaded area around it representing the confidence interval.

Figure 3: Forecasting

Groups
Another way you can apply machine learning algorithms to data points is by grouping them together. Examples of clustering algorithms include names you might already know like KMeans and hierarchical clustering. If you have an existing scatter plot, let Power BI find the clusters for you using the built-in clustering algorithm. This adds a new clusters field to existing fields that contain the outcomes of the clustering model in the table that you choose to add them to. Within the clustering options, you can let Power BI automatically determine the number of clusters. You can also change them manually. In the scatter plot on the left in Figure 4, you can see that the built-in clustering algorithm you add to the visual creates four clusters for the data. You can also find clusters for more than two fields if you use a table visual instead of a scatter plot.

Figure 4: Clustering with built-in options and clustering custom visual

Let Power BI Do as Much of the Work as Possible
Although it seems tempting to try to gain full control over the models by writing scripts in R or Python, for example, you should aim to let Power BI do a lot of the heavy lifting for you. An example of this includes using built-in Power Query functions to connect to and transform the data. This means that Power BI might automatically perform a step for you (that you should then check) or you can even connect to an algorithm like one for text analytics, image recognition, or even clustering. Once you load the data into Power BI, you can also follow a very similar approach.


On the right of Figure 4, you can see what the clustering algorithm gives you if you import the clustering custom visual from the Power BI AppSource store. This means that you can add ellipses around the data points in the clusters that the visual determines. This gives an example of an algorithm where the R script runs behind the scenes, but you don't have to write the R code yourself for it to appear. You can see that the clusters also display in the green tooltip in the highlighted visual.

Another way you can find clusters to group data points together is using the hierarchical clustering algorithm. At this point, you don't have a visual or algorithm you can plug the data into to create the visual you see on the right in Figure 5, but instead, you can construct it using R code.

Figure 5: Hierarchical clustering with R script

First calculate the distances between data points on the left of Figure 5 using the dist function on the data.frame dataset variable with the text labels removed. You then group each set together in pairs using the hclust R function. In order for these visuals to properly display, use the standard R visual in Power BI. Before you create these visuals directly in Power BI Desktop, make sure you install R (or Python) and then enable it directly in Power BI.

# The following code to create a dataframe and
# remove duplicated rows is always executed and
# acts as a preamble for your script:

# dataset <- data.frame(Label, Employment, Unemployment)
# dataset <- unique(dataset)

# Paste or type your script code here:

rownames(dataset) <- dataset$Label
# determine row labels in final visual
distance <- dist(dataset[, c('Employment', 'Unemployment')],
                 diag = TRUE)
# 2D distance between data points
hc <- hclust(distance)
# model hierarchical clustering
plot(hc)
# create cluster dendrogram plot

You might also find it helpful to test out your code first in an IDE like RStudio, which makes it easier to troubleshoot issues. Although Power BI is an amazing tool, it's also a bit limited in terms of ways to test out code before implementing it.
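If you also want the cluster assignments back in the data rather than just the dendrogram, base R's cutree() can slice the same hc model into a fixed number of groups. A small sketch, assuming four clusters to mirror the built-in algorithm's result above:

# Cut the dendrogram into four groups and attach the
# assignment back onto the dataset as a new column
dataset$Cluster <- cutree(hc, k = 4)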


Outliers or Anomalies
Finally, in data points, you want to determine whether points are part of the rest of the data points or not, which you can do through algorithms like outlier and anomaly detection. In Figure 6, you can see the clustering with the outlier detection custom visual compared to the built-in clustering algorithm. You can see the outliers denoted by small gray Xs in the visual on the right that fall outside the two clusters marked in teal and red.

Figure 6: Clustering with outlier custom visual

Grouping data together in clustering and determining the points outside these clusters as outliers represents one way to find outliers, but there are other ways to do it. You can also determine outliers using the outlier detection custom visual you see in Figure 7. This visual lets you separate the outliers from the rest of the points (the main group) using a z-score calculation to determine their sigma thresholds. The farther out points represent the outliers in red and the rest of the points aren't part of the outlier group. Like the clustering with outliers visual, the outlier detection visual also runs R code behind the scenes without you having to write any of it.

Figure 7: Outlier detection custom visual

What to Look for?
Although this might seem to project the overall objectives of algorithms like those found in machine learning, on a high level, you're looking to find trends, groups, and outliers and anomalies in data points. These don't necessarily exist as standalone things that you're looking for in data either. For example, you can find trends and groups in data, and then find the points that are outliers or anomalies.
rest of the points aren’t part of the outlier group. Like the In the lower visual in Figure 8, you can see the outcomes the points that allow you to
clustering with outliers visual, the outlier detection visual of running an anomaly detection algorithm directly with easily find the points that are
also runs R code behind the scenes without you having to an R script. You can see it reflected in the outcome of a outliers or anomalies.
write any of it. standard stacked column chart visual where you use con-
ditional formatting so that orange can mark the anomalies
Besides outliers, anomalies are data points that also don’t while the rest of the dates display as a blue color. The al-
fit into the expected pattern of behavior for data points. gorithm itself can run as a standard R visual, but you can
On a high level, outliers represent deviations from where also use the R script integration options in the Power Query
you are, and anomalies represent deviations from where you Editor to add a column for the anomaly detection model in
should be. You can see the outcome of running the built-in Figure 1.
algorithm to find anomalies in the top line chart of Figure
8, which you access through the analytics pane of the se- With the applied step for running an R script in the list of
lected visual. For Power BI to automatically find the anoma- these steps on the right, the code below shows what this R

codemag.com Putting Data Science into Power BI 67


Figure 8: Anomaly detection

Once you import the fpc library, you then set the seed so you can run the anomaly detection algorithm with the dbscan function. Then assign the results' cluster column as a new column in the existing dataset variable and assign the entire dataset to a new outcome variable in Power Query.

# 'dataset' holds the input data for this
# script

library(fpc) #loading package
set.seed(220) #setting seed
results <- dbscan(dataset$High, eps=2, MinPts=1)
dataset$Anomaly <- results$cluster
#add cluster to dataset data.frame
outcome <- dataset

Why would you choose one approach for clustering or anomaly detection over another (for example, built-in algorithms versus writing your own R code)? There isn't a single right answer for this. You might want different levels of control over the outcomes, or you might want to see a certain level of efficiency or speed that one approach provides. Like with everything else in data science, there isn't one right approach for the way to do something, but rather a selection of options to choose from.

Additional algorithms to explore include logistic regression, principal components analysis (PCA), classification, and much more. I go into these topics in much greater depth in several of my LinkedIn Learning courses: https://www.linkedin.com/learning/instructors/helen-wall.

Helen Wall




ONLINE QUICK ID 2209091

Getting Started with Cloud Native Buildpacks
Cloud Native Buildpacks transform your source code into images that can run on any cloud. They take advantage of modern container standards such as cross-repository blob mounting and image layer "rebasing," and, in turn, produce OCI-compliant images. You use an image because it's a lightweight, standalone, executable package of software that includes everything you need to run an application: code, runtime, system libraries, and settings.

When you tell Docker (or any similar tool) to build an image by executing the Docker build command, it reads the instructions in the Dockerfile, executes them, and creates an image as a result. Writing Dockerfiles that produce secure and optimized images isn't an easy feat. You need to know and stay updated about best practices or, if you're not careful, you may create images that take a long time to build. They may also not be secure.

Rather than investing time in optimizing images, you may want to focus on the business logic of your software. Fortunately, there's a tool that can read your source code and output an optimized, OCI-compliant image. This is what Cloud Native Buildpacks can do for you. You can use this tool in your software delivery process to automatically produce images without needing a Dockerfile.

This article introduces you to Cloud Native Buildpacks and shows you an example of how to use them in GitHub Actions. By the end of the article, you'll have a CI pipeline that builds and publishes an image to Docker Hub.

What Are Cloud Native Buildpacks?
Cloud Native (technologies that take full advantage of the cloud and cloud technologies) Buildpacks are pluggable, modular tools that transform application source code into container images. Their job is to collect everything your app needs to build and run. Among other benefits, they replace Dockerfile in the app development lifecycle, enable swift rebasing of images, and provide modular control over images (through the use of builders).

How Do They Work?
Buildpacks examine your app to determine the dependencies it needs and how to run it, then package it all as a runnable container image. Typically, you run your source code through one or more buildpacks. Each buildpack goes through two phases: the detect phase and the build phase.

The detect phase runs against your source code to determine whether a buildpack is applicable or not. If it detects an applicable buildpack, it proceeds to the build stage. If the project fails detection, it skips the build stage for that specific buildpack.

The build phase runs against your source code to download dependencies and compile your source code (if needed), and set the appropriate entry point and startup scripts.
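If you're curious which buildpacks a given builder carries before running a build against your own code, the pack CLI can describe a builder. This assumes a reasonably recent pack release; older versions spell the same command pack inspect-builder:

pack builder inspect paketobuildpacks/builder:base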
Peter Mbanugo
p.mbanugo@yahoo.com
www.pmbanugo.me
@p_mbanugo

Peter Mbanugo is a technical writer and software engineer who codes in JavaScript and C#. He is the author of "How to build a serverless app platform on Kubernetes". He has experience working on the Microsoft stack of technologies and also building full-stack applications in JavaScript. He's a co-chair on NodeJS Nigeria, a Twilio Champion, and a contributor to the Knative open-source project. You can find his OSS contributions at github.com/pmabnugo. When he isn't coding, he enjoys writing the technical articles that you can find on his website or other publications, such as on Pluralsight and Telerik.

Containerize a Node.js Web App
Let's create an image for a Node.js Web application. You're going to build a minimal REST API using Node.js. I prepared a starter repo at https://github.com/pmbanugo/fastify-todo-example, which you will fork and modify. Follow the steps below to clone and prepare the application:

1. Clone your fork of the repository.
2. Check out the code-magazine branch.
3. Open the terminal and run npm install to install the dependencies.
4. Open the project in your preferred code editor/IDE.

The project is a Web API built using the Fastify framework with just one route. Try out the application by opening the terminal and running the command npm start. The application should start and be ready to serve requests from localhost:3000. Open your browser to localhost:3000 and you should get a JSON response, as depicted in Figure 1.

Figure 1: The JSON response

You want to modify the response so that the JSON data in todo.json is returned. Open server.js and replace reply.send({ hello: "world" }) on line 7 with the code below:

const data = Object.entries(todos)
  .map((x) => x[1]);
reply.send(data);

Restart the server and open localhost:3000 in the browser. You should now get a list of todo items returned as a JSON array, as shown in Figure 2.

Figure 2: The todo items returned as a JSON array

Building and Running a Container Image
Let's build a container image of the Node.js Web app and run it locally. You don't need a Dockerfile; instead you'll use the pack CLI to build the image and Docker to run the container. If you don't have Docker installed, go to docker.com to download and install Docker Desktop. You can install the pack CLI using Homebrew by executing the command brew install buildpacks/tap/pack. If you don't use Homebrew, you can find more installation options at https://buildpacks.io/docs/tools/pack/#install.


Open your terminal and run the command pack build todo-fastify --builder paketobuildpacks/builder:base to build a container image using paketobuildpacks/builder:base as the builder image. The builder is an image that contains all the components necessary to execute a build, which includes the buildpacks and files that configure various aspects of the build. If you look through the output of the command, you should notice that during the detect phase, six buildpacks were detected to take part in the build phase (see Figure 3). These six buildpacks are then used to build and export an image.

Figure 3: The six buildpacks have been detected.

After the image is built, you'll run it using Docker. Run the command docker run -d --rm -p 8080:3000 todo-fastify to start the container and open localhost:8080. It should return the same JSON array as you get when running it without Docker. Stop the container using the command docker stop CONTAINER_ID. Replace CONTAINER_ID with the value that was returned when you started the container.

Rebuilding The Image
You're going to add another route that returns an item based on its key. Open server.js and add the code snippet below after line 10.

fastify.get("/:id", function (request, reply) {
  const data = todos[request.params.id];
  reply.send(data);
});

The new route gets the id from the request params, uses it to get a specific item from the todos object, and then returns the item as JSON.

Now that you've modified the code, you need to rebuild the image and run the container to test that the application still works. Open your terminal and run the command pack build todo-fastify --builder paketobuildpacks/builder:base to build the image. You should notice that the second build (and subsequent builds) are much faster because the images needed for the build processes were downloaded and cached in the initial run.

Now run the command docker run -d --rm -p 8080:3000 todo-fastify to start the container. Open http://localhost:8080/1 in your browser. You should get a JSON response similar to what you see in Figure 4.

Figure 4: The JSON response

Building an Image from a CI Pipeline
You can build images in your continuous integration pipeline using Cloud Native Buildpacks. With GitHub Actions, there's a Pack Docker Action (https://github.com/marketplace/actions/pack-docker-action) that you can use. When you combine it with the Docker Login Action, you can build and publish to a registry in your workflow.


Building an Image from a CI Pipeline
You can build images in your continuous integration pipeline using Cloud Native Buildpacks. With GitHub Actions, there's a Pack Docker Action (https://github.com/marketplace/actions/pack-docker-action) that you can use. When you combine it with the Docker Login Action, you can build and publish to a registry in your workflow. There's a similar process on GitLab using GitLab's Auto DevOps, and you can read about it at https://docs.gitlab.com/ee/topics/autodevops/stages.html#auto-build-using-cloud-native-buildpacks.

I included a GitHub Actions workflow as part of the starter files in the repository you forked. You'll find it in the .github/workflows/publish.yaml file. The workflow builds an image and publishes it to Docker Hub whenever you push new commits to your GitHub repository.

Let's take a look at the publish.yaml file to understand what it does.

The build-publish job defines two environment variables.

env:
  USERNAME: '<USER_NAME>'
  IMG_NAME: 'todo-fastify'

IMG_NAME holds the name of the image, in this case todo-fastify. The USERNAME variable is the Docker registry namespace where the image is stored. Replace the <USER_NAME> placeholder with your Docker Hub username.
such as those from Heroku and Google. Use the links below
There are four steps in this job, namely Checkout, Set App Name, Docker login, and Pack Build:

- name: Checkout
  uses: actions/checkout@v2
- name: Set App Name
  run: 'echo "IMG=$(echo ${USERNAME})/$(echo ${IMG_NAME})" >> $GITHUB_ENV'
- name: Docker login
  uses: docker/login-action@v1
  with:
    username: ${{ env.USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Pack Build
  uses: dfreilich/pack-action@v2
  with:
    args: 'build ${{ env.IMG }} --builder paketobuildpacks/builder:base --publish'

The Checkout step clones and checks out the branch. After that, the Set App Name step adds a new environment variable named IMG. The value is formed by concatenating the USERNAME and IMG_NAME variables.

The Docker login step authenticates the workflow run against the Docker registry because the final step builds and publishes the image. The Pack Build step uses the dfreilich/pack-action action to build the application and publish the image to the Docker registry. This action uses the Pack CLI behind the scenes, which, in turn, depends on Docker to build and publish to a registry.

The args supplied to dfreilich/pack-action tell it to run the build command using the paketobuildpacks/builder:base builder image. The --publish flag instructs the pack CLI to publish to the registry after the build process is complete.

The Docker login step needs a DOCKERHUB_TOKEN secret. Go to Docker Hub and create an access token. Then add a GitHub secret named DOCKERHUB_TOKEN with its value set to your Docker Hub access token.
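If you prefer the terminal to the Docker Hub and GitHub web UIs, the GitHub CLI can store the secret as well; this assumes gh is installed and authenticated for your fork:

# Prompts for the secret value; paste your Docker Hub access token.
gh secret set DOCKERHUB_TOKEN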

Now commit your changes and push your commits back to your GitHub remote. You should see the workflow run and when it's done, the image should be in your Docker registry repository.

Builder and Buildpacks
A builder is an image that contains buildpacks and the detection order in which builds are executed. There are different buildpacks from different vendors that you can use, such as those from Heroku and Google. Use the links below to check out some available builders and buildpacks:

• Heroku: hub.docker.com/r/heroku/buildpacks
• Google: github.com/GoogleCloudPlatform/buildpacks
• Paketo: paketo.io/docs/concepts/builders/

Visit www.buildpacks.io if you want to read more about Cloud Native Buildpacks.
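To see which buildpacks a given builder carries, and the detection order they run in, you can ask the pack CLI directly (older releases spell this pack inspect-builder):

# Show the buildpacks, detection order, and stack of the Paketo base builder.
pack builder inspect paketobuildpacks/builder:base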


Conclusion
I've shown you how to build images locally using the pack CLI, and also how to use it within GitHub Actions. You need a builder to build an image, and you used paketobuildpacks/builder:base as the builder image.

Peter Mbanugo



CODE COMPILERS

Sep/Oct 2022
Volume 23 Issue 5

Group Publisher: Markus Egger
Associate Publisher: Rick Strahl
Editor-in-Chief: Rod Paddock
Managing Editor: Ellen Whitney
Contributing Editor: John V. Petersen
Content Editor: Melanie Spiller
Editorial Contributors: Otto Dobretsberger, Jim Duffy, Jeff Etter, Mike Yeager
Writers in This Issue: Joydip Kanjilal, Vassili Kaplan, Sahil Malik, Peter Mbanugo, Jeremy Miller, John Petersen, Paul D. Sheriff, Helen Wall, Shawn Wildermuth
Technical Reviewers: Markus Egger, Rod Paddock
Production: Friedl Raffeiner Grafik Studio (www.frigraf.it)
Graphic Layout: Friedl Raffeiner Grafik Studio in collaboration with onsight (www.onsightdesign.info)
Printing: Fry Communications, Inc., 800 West Church Rd., Mechanicsburg, PA 17055
Advertising Sales: Tammy Ferguson, 832-717-4445 ext 26, tammy@codemag.com
Circulation & Distribution: General Circulation: EPS Software Corp.; Newsstand: American News Company (ANC)
Subscriptions: Colleen Cade, ccade@codemag.com

CODE Developer Magazine
6605 Cypresswood Drive, Ste 425, Spring, Texas 77379
Phone: 832-717-4445


CODA

On Consulting and Organizations

Many articles have been written about modern design, architectures, and languages. What about modern consulting and organizations? By "modern," I mean to convey a notion of what's needed today, perhaps at the expense of what would have been regarded as sound practice yesterday. Just as we must continually examine and refine our technical platforms, so too must we do the same for how we approach problem solving. If the ends are the technical solutions we build, then the means to that end is found in consulting services.

Consulting, in recent years, has become a loaded term, meaning different things to different people. At one end of the spectrum is the journeyman contract programmer. At the other end of the spectrum are global firms like KPMG and PWC that offer a wide range of services. Both ends of the spectrum are referred to as consultants. Is such broad application of the term "consultant" correct? The answer entirely depends on the services provided.

If we're providing advisory services such that requested recommended courses of action are the product, then yes. But if all we're doing is providing labor, then no. In other words, if all we're doing is slinging code at the client's direction, that's staff augmentation, not consulting. The client has defined the problem and the solution. The need we're filling in such cases is the labor to implement the solution. If at any point, advice is sought in how to solve a given problem, that's consulting. The latter presents a case where we bring our experience and skill to render affirmative recommendations for the given context.

The water's edge to good consulting cannot just be "it depends." With consulting, the primary work product includes the recommendations and the basis for making them. Such recommendations must be grounded, actionable, and feasible. By grounded, I mean based on the reality of constraints.

Anything that isn't grounded is the stuff of aspirations. Aspirations are good things because they present the better place we wish to be in the future. Once upon a time, I aspired to be a lawyer. To be a lawyer, it meant I needed to pass the bar examination. To pass the bar examination, it meant I had to take the bar examination, which meant I had to be eligible to take the exam. To be eligible to take the bar examination in most jurisdictions, I must have graduated from an accredited law school. In other words, to achieve that aspirational goal, three things were required: a plan, time, and work.

Think of the last time your customer or employer decided "we want to be Agile," without any real idea of how to get there. Strategy without tactics or a plan may very well be aspirational. But without any appreciation for the work required to create the plan and work the plan, the aspiration is nothing more than a pipe dream.

A consultant's job, in part, is to steer clear of pipe dreams. I'm often reminded of the following passage in The Pragmatic Programmer by Andrew Hunt and David Thomas:

A tourist visiting England's Eton College asked the gardener how he got the lawns so perfect. "That's easy," he replied. "You just brush off the dew every morning, mow them every other day, and roll them once a week." "Is that all?" asked the tourist. "Absolutely," replied the gardener. "Do that for 500 years and you'll have a nice lawn, too."

A responsible consultant honestly conveys to their client their obligations as freely as the benefits the client hopes to realize. Advice, as great and as well thought out as it may be, will only be as good as the client's ability to implement that advice.

Good Consulting Starts with KYC
KYC stands for "know your client." How are they organized? What are their values? What are their strengths and weaknesses? How committed are they to achieving their aspirational goals? How open are they to change? To know your client means to get into the weeds and embed with them. If there's a shop floor, don a hardhat and walk the floor. For every line of code proposed to be written, will you and your team know how that code furthers the organization's goals and objectives?

Of course, every client organization isn't a monolith. Understanding what the organization does as a collective whole is only one part of the equation. How organizations run, that's a function of their people.

Organizations, after all, are run by people, each with their own agendas, strengths, weaknesses, biases, and opinions. How often have you been confronted with the tension when the person you need to work with is worried that your consulting engagement is a threat to their job? The two things you must accept in such cases are that people are going to believe what they believe and it isn't the consultant's job to fight that battle. All you can do is carry out the engagement with fidelity and professionalism.

To that end, we must know and understand the client. A good first step toward that knowledge is in knowing and understanding how the organization is led. There's an old saying that a fish rots from the head down. The same can be said of an organization, which can only be as good as its leadership. Taking a standards-based patterns and practices approach, I employ the following model and then apply the client's specifics to the model:

Figure 1: A standard "C-suite" model

The model is my conception of the prototypical C-suite, in terms both of roles and in relationships to one another. If we're to rely on the empirical evidence of past engagements and then apply that experience to the current engagement, there must be a constant. In our technology solutions, constants are patterns and practices around coding, design, and architecture. We apply those patterns (and conform where necessary) to specific facts and circumstances. Organizations, in my opinion, aren't (or at least shouldn't be) any different.

In any successful consulting engagement, strategy and tactics must meet somewhere in the middle. Leadership creates the policy that the rank-and-file staff must execute. And good and actionable policy requires leadership. The benefits we strive to achieve for our clients can only be as good as the client's ability to take our recommendations.

As Figure 1 illustrates, the central unifying role that touches all other roles is the chief executive (in black). Organizations are run by people. As much as we would like to have clear consensus from the group, there often needs to be one person who makes the call. Ideally, that call is informed by the input of the other "chiefs." The ones in yellow on my model are primarily split between Administration and Operations: the chief administrative officer (CAO) and chief operations officer (COO).

An organization's administrative function is concerned with what gets done. Operations is concerned with how things get done. The technical counterparts that support the what and the how are the chief information officer (CIO) and the chief technical officer (CTO). The CIO sets forth the technical strategy of what's to be done. The CTO sets forth how that strategy is to be executed.

The third category of roles, in green, sets forth the four major functions I've identified as being common to any successful organization. The chief human resources officer (CHO) is all about the health, well-being, and development of an organization's personnel. An organization can only be as good as its people, and more specifically, how it treats its people. The chief compliance and legal officer (CCO) is the one who makes sure the rules, whether they be internal, external, legal, or regulatory, are followed. The chief financial officer (CFO) is concerned with capital. How are projects financed? Will it be through organic growth and internally generated cash? Or will it come by way of external sources like debt or equity investment? Finally, there's the chief marketing officer (CMO). It isn't enough to have great ideas, products, and services. The world needs to know about them to purchase them!

The foregoing list of roles is by no means exclusive. The model is simply my conception of how a canonical organization should be organized. It's far more important that each role be given its proper due. Organizations best situated to grow and improve through consulting have these roles, and they don't operate in a vacuum: they operate cooperatively, and they act as a check and balance for the other roles. For example, the CCO would be a check and balance on the CIO/CTO to ensure that the DevOps scheme supports the representations made by the CFO in the organization's public reporting.

Invariably, any technical consulting engagement will touch on one or more of these areas. How an organization makes decisions and evaluates options, irrespective of specific technology, is, in my opinion, the primary determinant of whether the recommendations we make will be successful. A secondary determinant is the organization's recognition of the work it must undertake to make our recommendations feasible.

The successful consultant makes the determination early on where the organization's maturity level is. It's through this recognition that consulting delivers value, truly a partnership wherein both parties exert equal effort. And quite often, before the recommendations may be implemented, there may be other preparatory work required that may require the work of other consulting organizations. Successful consultants gladly pass on clients that don't understand or appreciate that partnership equation.

John V. Petersen



