
Laravel, Grafana, PHP, TypeScript

JUL/AUG 2022
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95

An Introduction to TypeScript
Build Web Apps with PHP and Laravel
Create Dashboards with Grafana
Explore High-Performance C#
TABLE OF CONTENTS

Features

8 Simplifying ADO.NET Code in .NET 6: Part 1
Paul starts a new series about creating wrappers to limit the amount of code you need to write when you're working with ADO.NET in .NET 6.
Paul D. Sheriff

20 Writing High-Performance Code Using Span<T> and Memory<T> in C#
C# 7.2 introduced two new types: Span<T> and Memory<T>. Joydip dives in and finds that they're incredibly useful.
Joydip Kanjilal

27 Building MVC Applications in PHP Laravel: Part 2
Last time, Bilal looked at Models—the M in MVC. This time, he explains Views and Controllers (the V and the C) and how to take advantage of them in PHP Laravel.
Bilal Haidar

38 TypeScript: An Introduction
Everyone's using JavaScript. That's terrific, but it has its limitations. Shawn shows you how to overcome those limitations using TypeScript.
Shawn Wildermuth

46 Developing Dashboards Using Grafana
Wei-Meng explores creating dashboards using Grafana, a great tool for creating charts and other visual presentations of your data.
Wei-Meng Lee

64 The Excellent Schemer
With Accelerate for Excel in Microsoft 365, Bob finds that the Scheme tool makes writing user-defined functions easy.
Bob Calco

Columns

74 CODA: Agility's Primary Concern: Content Distribution
Not everyone understands the true meaning of running an agile shop. John reviews how to keep focus and get the most from this philosophy.
John V. Petersen

Departments

6 Editorial
17 Advertisers Index
73 Code Compilers

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $50.99 USD. Payments should be made in US dollars drawn on a US bank. American Express,
MasterCard, Visa, and Discover credit cards are accepted. Back issues are available. For subscription information, send e-mail to subscriptions@codemag.com or contact
Customer Service at 832-717-4445 ext. 9.
Subscribe online at www.codemag.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.
POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.

4 Table of Contents codemag.com


EDITORIAL

Longevity by Design
Early last month, I began working on a set of applications that may very well be my "last" applications as a software engineer. I put the "last" in quotes for a reason. Technically, these applications will not be the LAST applications I work on, but they will outlast me as a software engineer. Here's why I started to ponder this: Many of the applications I've worked on have been used in one form or another for many years after their initial development. These applications I'm starting on now will be around long after I retire as a professional software engineer. After over 30 years in this business, it never ceases to amaze me just how long our applications live.

I have a little game I play when describing the age of the applications. I envision my applications as my children who have their own ages and personalities. I discuss each application in terms of childhood development. Some are toddlers, some are in elementary school, some in middle school, some in high school, and in one case, if the application had been a real person, I could've sat down for a beer with it before it was replaced by a younger sibling.

It may seem trivial to assign ages to applications as though they're children. I disagree. This assignment of age gives a certain perspective to the general condition of an application. Generally (with a capital G), newer applications have cleaner and better code, and older applications have what I like to call "seasoning," meaning that their original features may have been augmented, replaced, or sometimes removed.

Applications are living, breathing entities and they need to be accorded respect for their longevity. Also, the longer an application is used, the more its value to an organization likely grows.

It's this longevity that we must always consider as developers. For every application I build, I try to keep in mind that these applications live on and on and on. I try not to take shortcuts. Or when I do take a shortcut (knowing that reality always trumps theory when deploying applications), I try to clean up after myself.

To illustrate this, I'd like to discuss the new applications I mentioned in my introduction. These are green field applications, and I'm doing my best to ensure their success in the long term. I started these projects using a discovery-based process, not in architecture but in design. I know that these applications need a reasonably polished look and feel. Rather than reinventing the wheel, I started with Bootstrap 5 as my design framework. I like Bootstrap because it's a well-designed layout tool with a broad depth of adoption and knowledge surrounding it. Bootstrap, like most frameworks, comes with its own set of constraints, and I wanted to make sure that it can support our look-and-feel goals. Using the discovery method, I created a new form and went to work using Bootstrap to build a page called Standards. Figure 1 demonstrates the results of this discovery process.

Once I felt that we could achieve our look and feel using Bootstrap, I moved on to the most important aspect of the application: the system architecture.

Using the same discovery process, I chose one part of the application (a simple lookup table/screen) and built out the requisite features needed by the system. This includes CRUD operations, validation rules, service architecture, unit tests, integration tests, etc. Keep in mind that I didn't build all of this from scratch. My team has around 10 years of code that I was able to pull the best code from. Once I was happy with this architecture, I continued my discovery with other system entities. I built around half a dozen features and I saw that there was a lot of boilerplate code that could be carved from the system with a few carefully constructed classes. So I refactored the entire application and reduced the lines of code by hundreds. Although no new features were added, the architecture of the system improved greatly and will hopefully take us a long way forward in the construction of this application.

Now, I could have just said "meh, that's good enough, let's move on." But I had the nagging feeling that this work would ensure our long-term success. I'm hoping that this application lives on so long that I can celebrate it with a beer when it's of age. Which gets me back to my original premise.

Let's discuss the application I joked about having a beer with. That application was built in Visual FoxPro around 1998/1999 against a beta build of SQL Server 7.0. That application lived until 2019, when it was replaced by a shiny new WPF application. This application has a newer architecture, is fully unit and integration tested, and is likely to be a critical application for that client for the next 20+ years. Here's the funny part: I was a bit sad that this application was replaced until I realized that a TON of the code is baked into SQL Server stored procedures and some applications that are just now hitting the 20-year mark. So that application and some of its DNA lives on and, given its age, is just about to be able to rent a car, put on some sunglasses, and ride off into the sunset.

Rod Paddock

CUSTOM SOFTWARE DEVELOPMENT • STAFFING • TRAINING/MENTORING • SECURITY

MORE THAN JUST A MAGAZINE!

Does your development team lack skills or time to complete all your business-critical software projects?
CODE Consulting has top-tier developers available with in-depth experience in .NET, web development, desktop development (WPF), Blazor, Azure, mobile apps, IoT and more.

Contact us today for a complimentary one hour tech consultation. No strings. No commitment. Just CODE.

codemag.com/code
832-717-4445 ext. 9 • info@codemag.com

ONLINE QUICK ID 2207021

Simplifying ADO.NET Code in .NET 6: Part 1

When developers think of how to access data, many use the Entity Framework (EF), Dapper, NHibernate, or some other object-relational mapper (ORM). Each of these ORMs uses ADO.NET to submit its SQL queries to the back-end database. So, why do many developers use ORMs instead of just using ADO.NET directly? Simply put, ORMs allow you to write less code.

If each of these ORMs is simply a wrapper around ADO.NET, can you write your own wrapper to cut down the amount of code you need to write? Absolutely! This series of articles shows you how to create a set of reusable wrapper classes to make it simpler to work with ADO.NET in .NET 6.

In these articles, you write code to map columns to properties in a class just like the ORMs do. You check for attributes such as [Column] and [Key] on class properties and use those attributes in your data application. You build SQL statements from your classes and submit that SQL to the database efficiently and safely. By the end of these articles, you're going to have a design pattern for typical CRUD applications that's fast and requires less ADO.NET code, just like the most popular ORMs out there. Besides learning about ADO.NET, these articles teach you how to create a set of generic classes and methods. By the end of these articles, you'll have some insight into how many ORMs work.

Paul D. Sheriff
http://www.pdsa.com
Paul has been in the IT industry over 35 years. In that time, he has successfully assisted hundreds of companies in architecting software applications to solve their toughest business problems. Paul has been a teacher and mentor through various mediums such as video courses, blogs, articles, and speaking engagements at user groups and conferences around the world. Paul has many courses in the www.pluralsight.com library (http://www.pluralsight.com/author/paul-sheriff) on topics ranging from .NET 6, LINQ, JavaScript, Angular, MVC, WPF, ADO.NET, jQuery, and Bootstrap. Contact Paul at psheriff@pdsa.com.

What's in These ADO.NET Wrapper Classes
The classes you're going to create in these articles are not intended to be an ORM. Instead, they're going to help you perform standard CRUD operations in an efficient manner and with less code. The classes you're going to learn about will perform the same as, and sometimes better than, the corresponding ORMs because there's less overhead. Another advantage of using the classes described herein is that you're writing straightforward C# code and you don't have to learn any new configuration or tooling. You're going to make these classes generic so that you can use them with any .NET data provider such as SQL Server or Oracle.

Read Operations
The classes you're going to learn about in these articles will perform standard CRUD operations. For reading data, you can submit dynamic SQL, call views, or invoke stored procedures to return data and have that data automatically mapped to C# classes.

Standard data annotations such as the [Column] attribute can be used to map a column to a property name that's different from the column name. You can use the [NotMapped] attribute so you can add properties that have nothing to do with the table. You're going to create a custom attribute called [Search] to help you handle many searching scenarios in a generic manner.

Modify Operations
These wrapper classes also allow you to submit action queries such as INSERT, UPDATE, and DELETE using either dynamic SQL or stored procedures. You're going to be able to retrieve the identity value generated from SQL Server, handle data validation using the .NET data annotations, and add your own custom validation code. Finally, you'll create a way to get excellent exception information to help you track down any potential problems quickly and easily.

Using the ADO.NET Wrapper Classes
Once you build a few reusable classes that you'll learn about in this article, all you'll need are an entity class, a search class, and a repository class for each table in your database. Let's look at some typical code you're going to have to write to retrieve rows from a table.

Get All Rows
Once you have a "Database Context", a "Repository", and an "Entity" class built, you can retrieve all rows in the table by using the code shown in the following code snippet. Now, you must admit, this is very simple code and right on par with what you'd write when using an ORM such as the Entity Framework.

using AdvWorksDbContext db = new(connectString);

List<Product> list = db.Products.Search();

// Display the Results
foreach (var item in list) {
  Console.WriteLine(item.ToString());
}

In the code above, create an instance of a database context class, named AdvWorksDbContext, within a using so the connection is closed and disposed of properly after the data is read. Within the AdvWorksDbContext class, you expose an instance of a ProductRepository class as a property named Products. Pass in a connection string to the constructor of the AdvWorksDbContext class. This connection string is used by the base class to create a connection to the database. Here's what your database context class is going to look like:

public class AdvWorksDbContext :
  SqlServerDatabaseContext {
  public AdvWorksDbContext(string connectString)
    : base(connectString) {
  }

  public override void Init() {
    base.Init();
    Products = new(this);
  }

  public ProductRepository Products { get; set; }
}

Search for Specific Rows
If you wish to search for specific rows within a table, you need only create a class with properties set to the values to search for. The code shown in Listing 1 allows you to search for data in the SalesLT.Product table from the AdventureWorksLT database. This code returns a list of Product objects that represent each row found that matches the criteria.

Listing 1: A sample of how to search for data using the ADO.NET wrapper you're going to create in this article

using AdvWorksDbContext db = new(connectString);
// Search for products that
// start with 'C' AND ListPrice >= 50
List<Product> list = db.Products.Search(
  new ProductSearch() {
    Name = "C",
    ListPrice = 50
  });

// Display the Results
foreach (var item in list) {
  Console.WriteLine(item.ToString());
}

The Search() method has two overloads, one of which is passed an instance of a ProductSearch class. The Name property in the ProductSearch is set to the value "C" and the ListPrice is set to 50. The ProductSearch class looks like the following:

public class ProductSearch {
  [Search("LIKE")]
  public string Name { get; set; }
  [Search(">=")]
  public decimal? ListPrice { get; set; }
}

A custom [Search] attribute (which you'll build in the next article) specifies what operator to use when building the WHERE clause to submit to the database. The Search() method generates a SELECT statement with a WHERE clause when a search object is passed in, as shown in the following code snippet:

SELECT * FROM SalesLT.Product
WHERE Name LIKE 'C%' AND ListPrice >= 50;

Obviously, you're going to need a way to add, edit, and delete rows of data within a table. More code will be added to this class to perform the other CRUD functionality as you work your way through these articles. All you need is an entity class to map to your table, a search class, and a repository class for each table. You then expose each repository class from the database context class, and you're ready to write the code shown in Listing 1.

Getting Started
To really learn how to use these ADO.NET wrapper classes, I recommend that you follow along as I go through the steps in this article. To start, create a .NET 6 console application and download the AdventureWorksLT SQL Server database.

Build a .NET 6 Console Application
Open Visual Studio 2022 and click on the Create a new project option. Locate the template for a Console App that runs on .NET on Windows, Linux, and macOS. Highlight that template and click the Next button. Type in AdoNetWrapperSamples for the Project name property. Click the Next button and make sure you choose .NET 6.0 (Long-term support) for the Framework property.

Right mouse-click on the AdoNetWrapperSamples project and select Manage NuGet Packages… Click on the Browse tab, type in System.Data.SqlClient, and install this package into your project. This package provides all the ADO.NET classes you need to connect to a database and to retrieve and modify data to/from a table.

Get the AdventureWorksLT Database
For these articles, I'm going to use the AdventureWorksLT database, so you need to install this database into your SQL Server. You can get the database and instructions at this link: https://bit.ly/3xi1Tzg. You can also get a recent database backup and the appropriate .SQL files to create the tables at my GitHub located at https://github.com/PaulDSheriff/AdventureWorksLT. Once you have the AdventureWorksLT database installed into your SQL Server, you're ready to start building the classes to access the Product table within that database.

The Basics for Building an Entity List
To read data from a table in ADO.NET, you need to use three different classes. First, you need to create a SqlConnection object with the appropriate connection string to your database. Place that SqlConnection object into the Connection property of a SqlCommand object and set the CommandText property to the SQL to submit. Finally, call the ExecuteReader() method on the SqlCommand object to create an instance of a SqlDataReader object. Loop through the rows in the data reader and, each time through, build an instance of a Product object and populate the properties with the columns from the Product table you're reading.

Listing 2: An entity class has one property for each column in your table

#nullable disable

using System.ComponentModel
  .DataAnnotations.Schema;

namespace AdoNetWrapperSamples.EntityClasses;

[Table("Product", Schema = "SalesLT")]
public partial class Product
{
  public int ProductID { get; set; }
  public string Name { get; set; }
  public string ProductNumber { get; set; }
  public string Color { get; set; }
  public decimal StandardCost { get; set; }
  public decimal ListPrice { get; set; }
  public DateTime SellStartDate { get; set; }
  public DateTime? SellEndDate { get; set; }
  public DateTime? DiscontinuedDate { get; set; }

  public override string ToString() {
    return $"Product Name: {Name} - Product ID: {ProductID} - List Price: {ListPrice:c}";
  }
}
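As an aside, the custom [Search] attribute used by the ProductSearch class isn't built until the next installment. As a rough illustration of the idea only (the attribute, class names, and string-building here are my own sketch, not the article's implementation), an attribute can carry the comparison operator and a little reflection can assemble the WHERE clause. Note that a production version would emit parameter placeholders rather than inline literals:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical attribute sketch: stores the comparison operator.
// The article's real [Search] attribute is built in Part 2.
[AttributeUsage(AttributeTargets.Property)]
public class SearchAttribute : Attribute
{
    public string Operator { get; }
    public SearchAttribute(string op) => Operator = op;
}

// Same shape as the article's ProductSearch class
public class ProductSearch
{
    [Search("LIKE")]
    public string Name { get; set; }
    [Search(">=")]
    public decimal? ListPrice { get; set; }
}

public static class WhereBuilder
{
    // Build a WHERE clause from every non-null property marked [Search]
    public static string BuildWhereClause(object searchObject)
    {
        var clauses = new List<string>();
        foreach (PropertyInfo prop in searchObject.GetType().GetProperties())
        {
            var attr = prop.GetCustomAttribute<SearchAttribute>();
            object value = prop.GetValue(searchObject);
            if (attr == null || value == null) continue;

            // LIKE gets a trailing wildcard; other operators compare directly.
            // Inline literals are for illustration only; real code should
            // use command parameters to avoid SQL injection.
            string literal = attr.Operator == "LIKE"
                ? $"'{value}%'"
                : value.ToString();
            clauses.Add($"{prop.Name} {attr.Operator} {literal}");
        }
        return clauses.Count == 0
            ? ""
            : "WHERE " + string.Join(" AND ", clauses);
    }
}
```

With this sketch, BuildWhereClause(new ProductSearch { Name = "C", ListPrice = 50 }) produces the same WHERE clause shown in the snippet above.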


The Product Class
In the AdventureWorksLT database, there's a Product table in the SalesLT schema. To retrieve the data from that table and create an instance of a Product object for each row, create a Product entity class as shown in Listing 2. I'm not using all the columns in the table, just enough to illustrate the concept of creating a collection of Product objects.

Right mouse-click on the project and add a new folder named EntityClasses. Right mouse-click on this EntityClasses folder and add a new class named Product. Add the code shown in Listing 2. Please note that due to space limitations in the magazine, the string in the ToString() method is broken across multiple lines. You'll need to put that string all on one line.

The DataReaderExtensions Class
One of the challenges you may encounter when reading data from a table is a null value in columns. When a null is read from a data reader, .NET interprets it as a DBNull object. You can't put this value into a .NET data type, even if it's a nullable type. So, the best thing to do is to create an extension method, named GetData<T>(), to handle these DBNull values. For example, if your reader variable is named rdr, call the GetData() method passing in the data type that you wish the data to be converted to, as shown in the following code snippet:

ProductID = rdr.GetData<int>("ProductID");
Name = rdr.GetData<string>("Name");

Right mouse-click on the project and add a new folder named Common. Right mouse-click on this Common folder and add a new class named DataReaderExtensions. Add the code shown in Listing 3 to this file.

Listing 3: Extension method to handle null values in the data reader

#nullable disable
using System.Data;

namespace AdoNetWrapper.Common;

public static class DataReaderExtensions {
  public static T GetData<T>(this IDataReader dr,
    string name, T returnValue = default) {

    var value = dr[name];
    if (!value.Equals(DBNull.Value)) {
      returnValue = (T)value;
    }

    return returnValue;
  }
}

Create a ProductRepository Class
Now that you have a Product class to represent the Product table, create a class called ProductRepository to perform the actual calls to the database and create the collection of Product objects. Right mouse-click on the project and add a new folder named RepositoryClasses. Right mouse-click on this RepositoryClasses folder and add a new class named ProductRepository. Add the code shown below to this file:

#nullable disable

using AdoNetWrapper.Common;
using AdoNetWrapperSamples.EntityClasses;
using System.Data;
using System.Data.SqlClient;

namespace AdoNetWrapperSamples.RepositoryClasses;

public class ProductRepository {
}

Listing 4: The Search() method is the public API to perform searching of records

public virtual List<Product> Search
  (string connectString, string sql) {
  List<Product> ret;

  // Create a connection
  using SqlConnection cnn = new(connectString);
  // Create a command object
  using SqlCommand cmd = new(sql, cnn);
  // Open the connection
  cnn.Open();
  // Create a data reader within a using
  // so it is closed and disposed of properly
  using SqlDataReader rdr = cmd.ExecuteReader
    (CommandBehavior.CloseConnection);
  // Build the collection of entity objects
  ret = BuildEntityList(rdr);

  return ret;
}

Listing 5: The BuildEntityList() method is responsible for loading data into entity objects

protected virtual List<Product> BuildEntityList(
  IDataReader rdr) {
  List<Product> ret = new();

  // Loop through all rows in the data reader
  while (rdr.Read()) {
    // Create new object and add to collection
    ret.Add(new Product {
      ProductID = rdr.GetData<int>("ProductID"),
      Name = rdr.GetData<string>("Name"),
      ProductNumber = rdr.GetData<string>("ProductNumber"),
      Color = rdr.GetData<string>("Color"),
      StandardCost = rdr.GetData<decimal>("StandardCost"),
      ListPrice = rdr.GetData<decimal>("ListPrice"),
      SellStartDate = rdr.GetData<DateTime>("SellStartDate"),
      SellEndDate = rdr.GetData<DateTime?>("SellEndDate"),
      DiscontinuedDate = rdr.GetData<DateTime?>("DiscontinuedDate")
    });
  }
  return ret;
}

Within the ProductRepository class, create a method called Search() as shown in Listing 4. This method accepts a connection string and a SQL string. It then creates the new SqlConnection and SqlCommand objects. It opens the connection and then invokes the ExecuteReader() method to create a SqlDataReader object.
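Listing 3's GetData<T>() behavior is easy to verify without a database: a DataTable's CreateDataReader() method returns an object that implements IDataReader, so you can feed it rows containing DBNull and watch the defaults come back. The Demo wrapper below is mine, purely for illustration; the extension method is the same as Listing 3:

```csharp
using System;
using System.Data;

public static class DataReaderExtensions
{
    // Same extension method as Listing 3: returns the column value,
    // or the supplied default when the column holds DBNull
    public static T GetData<T>(this IDataReader dr,
        string name, T returnValue = default)
    {
        var value = dr[name];
        if (!value.Equals(DBNull.Value))
        {
            returnValue = (T)value;
        }
        return returnValue;
    }
}

public static class Demo
{
    public static (string Color, DateTime? SellEndDate) ReadFirstRow()
    {
        // Build a one-row, in-memory table with null Color and SellEndDate
        var table = new DataTable();
        table.Columns.Add("Color", typeof(string));
        table.Columns.Add("SellEndDate", typeof(DateTime));
        table.Rows.Add(DBNull.Value, DBNull.Value);

        // DataTableReader implements IDataReader, just like SqlDataReader
        using IDataReader rdr = table.CreateDataReader();
        rdr.Read();

        // DBNull comes back as the defaults: null string, null DateTime?
        return (rdr.GetData<string>("Color"),
                rdr.GetData<DateTime?>("SellEndDate"));
    }
}
```

Both values come back as null rather than throwing an InvalidCastException, which is exactly the behavior BuildEntityList() relies on.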


The Search() method calls the BuildEntityList() method, to which the SqlDataReader object is passed. This method is responsible for building a Product object from each row in the table. Calls to the extension method GetData<T>() make sure null values are handled correctly. Add the BuildEntityList() method (Listing 5) to the ProductRepository class.

It's now time to try out the code you wrote to ensure that you can read data from the Product table and build a collection of Product objects. Open the Program.cs file and delete all the code. Add the code shown in Listing 6 to this file.

Be sure to adjust the connection string to the correct values for connecting to your server. If you're unsure about what connection string to use, visit www.connectionstrings.com, as it provides a wealth of information on how to build connection strings for all sorts of databases.

Try It Out
Run the console application and, if you typed everything in correctly, you should see a set of products displayed, along with the total items returned, as shown in Figure 1.

Figure 1: You should see a couple hundred Product objects appear when you run the application.

Read Connection String from appsettings.json
Instead of hard-coding your connection string, it's a better idea to put that connection string into a .json file. Right mouse-click on the project and select Add > New Item… from the context-sensitive menu. Click on the General tab and locate the JSON File item. Set the Name property to appsettings.json and click the Add button. Add the code shown below into the .json file. Be sure to adjust the connection string to the one you need for your server. Also, your connection string shouldn't be broken across multiple lines; I had to do that for formatting purposes in this magazine.

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=Localhost;
      Database=AdventureWorksLT;
      Integrated Security=Yes"
  }
}

Make sure that the appsettings.json file is always copied to the directory where the program is run from. Click on the appsettings.json file to give it focus, then right mouse-click on it and select Properties from the context-sensitive menu. From within the Properties window, change the Copy to Output Directory property to Copy always, as shown in Figure 2.

Figure 2: Ensure that the appsettings.json file is always copied to the distribution directory.

Add the Hosting Package
To use the Configuration service to read configuration settings in .NET 6, add a package called Microsoft.Extensions.Hosting. Right mouse-click on the project and select Manage NuGet Packages… Click on the Browse tab, enter Microsoft.Extensions.Hosting, and press Enter. In the list below, highlight the package and click on the Install button to add this package to your project. Open the Program.cs file and add a few using statements:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

Delete the line of code you wrote earlier that defines and initializes the ConnectString variable and replace it with the following lines of code. These lines of code create a hosting service from which you can request a Configuration service. You then call the GetValue<string>() method to read the connection string from the appsettings.json file.

// Setup Host
using IHost host =
  Host.CreateDefaultBuilder().Build();

// Ask service provider for configuration
IConfiguration config = host
  .Services.GetRequiredService<IConfiguration>();

// Get connection string
string ConnectString = config.GetValue<string>
  ("ConnectionStrings:DefaultConnection");

Try It Out
Run the application and, if you typed everything in correctly, everything should work just as it did before.

Create a Generic Database Context Class
If you look at the code in Listing 4, you can figure out that you don't really want to write this code repeatedly for each table you wish to read data from. There are a few problems with this code. First off, there's a lot of it, and secondly, you have hard-coded references to the "Sql*" classes. What if you want to use Oracle, SQLite, or another database server at some point? It would be nice if you had a more generic approach.
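The "more generic approach" hinges on the System.Data interfaces: helper code that accepts IDataReader (rather than SqlDataReader) runs unchanged against any provider's reader, or even against an in-memory DataTable. This is a small sketch of my own to show the principle, not part of the article's classes:

```csharp
using System.Collections.Generic;
using System.Data;

public static class ReaderHelper
{
    // Depends only on the IDataReader interface, so it works with a
    // SqlDataReader, an Oracle reader, or a DataTableReader alike.
    public static List<string> ReadColumn(IDataReader rdr, string column)
    {
        var values = new List<string>();
        while (rdr.Read())
        {
            values.Add(rdr[column]?.ToString());
        }
        return values;
    }
}
```

Any IDataReader can be handed to ReadColumn(); swapping SQL Server for another provider doesn't change this code at all, which is exactly the property the DatabaseContext class below is designed around.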


with this code. First off, there’s a lot of it, and secondly, you
have hard-coded references to the “Sql*” classes. What if
you want to use Oracle, SqlLite, or another database server
at some point? It would be nice if you had a more generic
approach.

To achieve this goal, you’re going to build a few classes, as


shown in Figure 3. The first class is called DatabaseContext,
which is an abstract class that only contains methods used
to create connections, commands, parameters, and data
readers. This class implements the IDisposable interface so
it can be wrapped within a using. The methods for creating
the various objects are meant to be overridden by the
SqlServerDatabaseContext, or the OracleDatabaseContext
Figure 3: Build classes classes with concrete implementations of the specific
from the most generic objects for their database provider. For example, the
to the most specific CreateConnection() method in the DatabaseContext looks
for a good architecture. like the following:

public abstract IDbConnection


Listing 6: Create a ProductRepository object and search for all product data CreateConnection(string connectString);
#nullable disable
The overridden method in the SqlServerDatabaseContext
using AdoNetWrapperSamples.EntityClasses; class returns an actual SqlConnection object as shown in
using AdoNetWrapperSamples.RepositoryClasses; the following code snippet:
string ConnectString = “Server=Localhost;
Database=AdventureWorksLT; public override SqlConnection
Integrated Security=Yes”; CreateConnection(string connectString) {
string Sql = “SELECT * FROM SalesLT.Product”;
return new SqlConnection(connectString);
ProductRepository repo = new(); }
List<Product> list = repo.Search
(ConnectString, Sql); The SqlServerDatabaseContext and OracleDatabaseContext
Console.WriteLine(“*** Display the Data ***”); classes are meant to be generic to work with any SQL Server or
// Display Data Oracle database respectively. The AdvWorksDbContext class
foreach (var item in list) { shown in Figure 3 inherits from the SqlServerDatabaseContext
Console.WriteLine(item.ToString());
} and is meant to work with just the AdventureWorksLT
database. The Repository classes use the services of the
Console.WriteLine(); SqlServerDatabaseContext to read, insert, update, and
Console.WriteLine($”Total Items: {list.Count}”); delete data from their appropriate tables.
Console.WriteLine();

Build the Abstract DatabaseContext Class

Right mouse-click on the Common folder and create a new class named DatabaseContext as shown in Listing 7. This class has a constructor to which a connection string must be passed. After all, without a connection string, there's no way you're going to be able to interact with a database. It has a few public properties, Init() and Dispose() methods. Most of the properties are self-explanatory, but the ParameterPrefix property is going to be implementation-dependent. For example, when using SQL Server, parameters are prefixed with the at sign (@). However, when using Oracle, parameters are prefixed with the colon (:). The Init() method of the SqlServerDatabaseContext class is where you initialize the ParameterPrefix to the @ sign.

Listing 7: The abstract DatabaseContext class provides a generic interface for creating ADO.NET objects

#nullable disable

using System.Data;

namespace AdoNetWrapper.Common;

/// <summary>
/// Abstract base class for all
/// ADO.NET database operations
/// </summary>
public abstract partial class DatabaseContext
  : IDisposable {
  public DatabaseContext(string connectString) {
    ConnectionString = connectString;
    Init();
  }

  public string ConnectionString { get; set; }
  public string ParameterPrefix { get; set; }
  public IDbCommand CommandObject { get; set; }
  public IDataReader DataReaderObject { get; set; }

  protected virtual void Init() {
    ParameterPrefix = string.Empty;
  }
}

Create a Dispose Method

Add a Dispose() method (Listing 8) to the DatabaseContext class to check the various properties to ensure that they're closed and disposed of properly. Check the DataReaderObject property and if it's not null, call the Close() and Dispose() methods on this object. Next, check the CommandObject property and ensure it's not null. If it's not, check the Connection property, and if it's not null, check whether the Transaction property is not null and dispose of the Transaction property. Call the Close()
and Dispose() on the Connection property and then finally

12 Simplifying ADO.NET Code in .NET 6: Part 1 codemag.com


and Dispose() methods on the Connection property, and then finally dispose of the CommandObject. The Dispose() method in this class runs when you either explicitly call the Dispose() method or the instance of the DatabaseContext goes out of scope when wrapped within a using.

Listing 8: You must have a Dispose() method in order to ensure there are no memory leaks

public virtual void Dispose() {
  // Close/Dispose of data reader object
  if (DataReaderObject != null) {
    DataReaderObject.Close();
    DataReaderObject.Dispose();
  }

  // Close/Dispose of command object
  if (CommandObject != null) {
    if (CommandObject.Connection != null) {
      if (CommandObject.Transaction != null) {
        CommandObject.Transaction.Dispose();
      }
      CommandObject.Connection.Close();
      CommandObject.Connection.Dispose();
    }
    CommandObject.Dispose();
  }
}

Now that you have the basics of the DatabaseContext class created, add a couple of method overloads to create connection objects. In this class, it's important to only use the interface objects such as IDbConnection so this class is completely database independent.

public virtual IDbConnection CreateConnection() {
  return CreateConnection(ConnectionString);
}

public abstract IDbConnection
  CreateConnection(string connectString);

The next two overloaded methods are used to build command objects. Use the IDbCommand interface as the return values from these methods.

public virtual IDbCommand CreateCommand
  (string sql) {
  return CreateCommand(CreateConnection(), sql);
}

public abstract IDbCommand CreateCommand
  (IDbConnection cnn, string sql);

When you need to build parameters for either dynamic SQL or for submitting stored procedures, you need a couple of methods to create parameters. These methods use the IDataParameter interface, but when you create the SqlServerDatabaseContext class, you return an actual SqlParameter object.

public abstract IDataParameter CreateParameter
  (string paramName, object value);

public abstract IDataParameter CreateParameter();

The next method to create helps you retrieve any output parameters returned from a stored procedure. This method, GetParameter(), returns an IDataParameter object, but is an abstract method, so you must override it in the SqlServerDatabaseContext or OracleDatabaseContext class you create.

public abstract IDataParameter GetParameter
  (string paramName);

The next two methods are used to create an instance of a data reader object. Ensure you're returning the IDataReader interface from these methods in this class.

public virtual IDataReader CreateDataReader() {
  return CreateDataReader(CommandObject,
    CommandBehavior.CloseConnection);
}

public virtual IDataReader CreateDataReader(
  CommandBehavior cmdBehavior) {
  return CreateDataReader(CommandObject,
    cmdBehavior);
}

You need one additional method for creating a data reader. This method accepts a command object and the CommandBehavior enumeration. The default for the CommandBehavior is to close the connection when the reader is closed, but the optional parameter allows you to change that behavior if needed.

This method is the one that is called by the previous two methods you created. It opens the connection, and calls the ExecuteReader() method on the command object to build the actual data reader object. The data reader object created is placed into the DataReaderObject property.

public virtual IDataReader CreateDataReader(
  IDbCommand cmd, CommandBehavior cmdBehavior =
    CommandBehavior.CloseConnection) {
  // Open Connection
  cmd.Connection.Open();

  // Create DataReader
  DataReaderObject =
    cmd.ExecuteReader(cmdBehavior);

  return DataReaderObject;
}

Getting the Sample Code

You can download the sample code for this article by visiting www.CODEMag.com under the issue and article, or by visiting www.pdsa.com/downloads. Select "Articles" from the Category drop-down. Then select "Simplifying ADO.NET Code in .NET 6 - Part 1" from the Item drop-down.

Create a SQL Server Database Context Class

Now that you have the generic DatabaseContext class created, build the SqlServerDatabaseContext class. This class supplies the concrete "Sql*" objects to work with any SQL Server database. Right mouse-click on the Common folder and create a new class named SqlServerDatabaseContext. Add the code shown in Listing 9. This class has a constructor that must accept a connection string, and it passes it directly to the base class. The Init() method, which is called from the base class, is where you set the ParameterPrefix property to the at (@) sign, which will be used by the methods dealing with parameters.

Let's now add the various overrides to the methods that create the concrete implementations of the SQL objects. Start by overriding the CreateConnection() method to return a SqlConnection object. You only need to override the one method, as the other method in the base class simply calls this one.


Listing 9: The SqlServerDatabaseContext class inherits from the DatabaseContext class and initializes the ParameterPrefix property in the Init() method.

#nullable disable

using System.Data;
using System.Data.SqlClient;

namespace AdoNetWrapper.Common;

/// <summary>
/// Database context using ADO.NET
/// for SQL Server Databases
/// </summary>
public partial class SqlServerDatabaseContext
  : DatabaseContext {

  public SqlServerDatabaseContext
    (string connectString)
    : base(connectString) { }

  protected override void Init() {
    base.Init();

    ParameterPrefix = "@";
  }
}

public override SqlConnection CreateConnection
  (string connectString) {
  return new SqlConnection(connectString);
}

Override the one method for creating a command object because the other two methods in the base class call this one. In this method, set the CommandObject property with the new SqlCommand() object you create. You need to have this command object property public so you can add parameters to it and dispose of it when the database context class goes out of scope.

public override SqlCommand CreateCommand
  (IDbConnection cnn, string sql) {
  CommandObject = new SqlCommand(sql,
    (SqlConnection)cnn);
  CommandObject.CommandType = CommandType.Text;

  return (SqlCommand)CommandObject;
}

Some stored procedures return an OUTPUT parameter. After submitting a command, you can retrieve one of the OUTPUT parameters by querying the appropriate parameter object and requesting the value. The GetParameter() method shown below accepts a parameter name to locate, ensures it starts with the appropriate prefix, and then accesses the Parameters property on the CommandObject to retrieve the value.

public override SqlParameter GetParameter
  (string paramName) {
  if (!paramName.StartsWith(ParameterPrefix)) {
    paramName = ParameterPrefix + paramName;
  }

  return ((SqlCommand)CommandObject)
    .Parameters[paramName];
}

Another override to add is the one that creates the SqlDataReader object. Pass in an instance of a command object and, optionally, the CommandBehavior to use after closing the reader.

public override SqlDataReader CreateDataReader
  (IDbCommand cmd, CommandBehavior
    cmdBehavior = CommandBehavior
    .CloseConnection) {
  // Open Connection
  cmd.Connection.Open();
  // Create DataReader
  DataReaderObject = cmd
    .ExecuteReader(cmdBehavior);

  return (SqlDataReader)DataReaderObject;
}

You're not going to create the OracleDatabaseContext class in these articles. However, you follow the same procedures I did in this section for building the SqlServerDatabaseContext class, just substituting the appropriate "Oracle*" classes for the "Sql*" classes used here.
The next methods to override are the ones that create new parameter objects. The first method accepts a parameter name and the value to assign to the parameter. It checks to see if the paramName parameter starts with the value in the ParameterPrefix property or not. If it doesn't, then the ParameterPrefix is added to the paramName parameter before the SqlParameter object is created. The second method returns an empty SqlParameter object.

public override SqlParameter CreateParameter
  (string paramName, object value) {
  if (!paramName.StartsWith(ParameterPrefix)) {
    paramName = ParameterPrefix + paramName;
  }
  return new SqlParameter(paramName, value);
}

public override SqlParameter CreateParameter() {
  return new SqlParameter();
}

Modify the Search() Method to Use the SQL Server Database Context Class

Now that you have built the generic SqlServerDatabaseContext class, you should use this to perform the searching. Open the ProductRepository.cs file and remove the using System.Data.SqlClient; statement. Modify the Search() method to look like the following code.

public virtual List<Product> Search
  (string connectString, string sql) {
  List<Product> ret;

  using SqlServerDatabaseContext dbContext =
    new(connectString);

  // Create Command Object with SQL
  dbContext.CreateCommand(sql);

  ret = BuildEntityList(
    dbContext.CreateDataReader());

  return ret;
}


As you can see, this code is much simpler, but still takes advantage of the using to make sure all resources are appropriately disposed of. You first create an instance of the SqlServerDatabaseContext class as part of the using. You then create a command object with the SQL to submit. Finally, you create the data reader and pass it to the BuildEntityList() method.

Try It Out

Run the console application again, and you should see the same number of product objects displayed as you saw earlier.

Generically Creating an Entity List

The code for working with a database has been greatly simplified, but let's now work on the BuildEntityList() method. This method is hard-coded to build just Product objects. Let's make this a generic method to which you can pass any type of object to build a collection out of.

You're going to use .NET Reflection to build the objects. Yes, I know what you're thinking: Isn't reflection very slow? It used to be, but starting with .NET 3.5, Microsoft did a serious overhaul of the reflection classes to ensure that it would be fast enough to support the new Entity Framework they were releasing at the same time. In fact, the code shown in Listing 10 is like the code the Entity Framework uses for populating entity collections. Open the ProductRepository.cs file and add a new using statement.

using System.Reflection;

Turn the Search() method into a generic method by passing in a type parameter, <TEntity>, as seen in the code below.

public virtual List<TEntity> Search<TEntity>
  (string connectString, string sql) {
  List<TEntity> ret;

  using SqlServerDatabaseContext dbContext
    = new(connectString);

  dbContext.CreateCommand(sql);

  ret = BuildEntityList<TEntity>
    (dbContext.CreateDataReader());

  return ret;
}

When you make the call to the Search() method from the Program.cs file, specify the name of the class to create as the list of objects. In the code shown below, you pass in Product to the <TEntity> type parameter.

List<Product> list =
  repo.Search<Product>(ConnectString, Sql);

Make BuildEntityList() a Generic Method

Modify the BuildEntityList() to be a generic method as shown in Listing 10. Add the <TEntity> type parameter like you did in the Search() method. The SqlDataReader object has a GetName() method, which retrieves the current column name via an index. You need to be able to map this column name to the same property name in the type of class passed in. Use the typeof() method on the type


Listing 10: Use reflection to read the data and create a list of any type of entity classes

protected virtual List<TEntity>
  BuildEntityList<TEntity>(IDataReader rdr) {
  List<TEntity> ret = new();
  string columnName;

  // Get all the properties in <TEntity>
  PropertyInfo[] props =
    typeof(TEntity).GetProperties();

  // Loop through all rows in the data reader
  while (rdr.Read()) {
    // Create new instance of Entity
    TEntity entity =
      Activator.CreateInstance<TEntity>();

    // Loop through columns in data reader
    for (int index = 0; index < rdr.FieldCount;
      index++) {
      // Get field name from data reader
      columnName = rdr.GetName(index);

      // Get property that matches the field name
      PropertyInfo col = props.FirstOrDefault(
        p => p.Name == columnName);

      if (col != null) {
        // Get the value from the table
        var value = rdr[columnName];
        // Assign value to property if not null
        if (!value.Equals(DBNull.Value)) {
          col.SetValue(entity, value, null);
        }
      }
    }
    // Add new entity to the list
    ret.Add(entity);
  }

  return ret;
}

parameter passed in and then call the GetProperties() method to return all the properties in the class. You now have all the property names, and you can get the column name from the data reader. All you need to do now is to map the value from the data reader into the property of the class.

Loop through each row in the Product table and each time through, create a new instance of a Product object using the Activator.CreateInstance() method. Loop through each column in the data reader object and set the field name into the variable named columnName. This code assumes that the column name in the table exactly matches the property name in the class. Later, you're going to learn how to map column names to property names using an attribute. Look up the property by calling the props.FirstOrDefault() method to locate where the property name is the same as the column name.

If a property is found with the same name, check to see if the value in the column is not a DbNull value. If it isn't, use the reflection SetValue() method on the property to set the value on the newly created instance of the Product class to the value from the column in the table. Add the new instance of the Product class to the collection and repeat this process until all rows in the table have been processed.

Try It Out

Open the Program.cs file and modify the call to the Search() method to look like the following code.

List<Product> list =
  repo.Search<Product>(ConnectString, Sql);

Run the console application and, once again, you should see the same product objects displayed.

Handle the [Column] Attribute on the Entity Class

One of the problems with the way the BuildEntityList() method is written is that it assumes the column name is the same as the property name in your class. There are many times when you might have column names with characters that aren't valid in property names. Or maybe your column names are all upper-case and you want your properties to be camel case. Or maybe you just want your property name to be more descriptive than the column name is. Whatever the reason, simply add a [Column] attribute above the property you want to be different. Open the Product.cs file and rename the ProductID and the Name properties to be Id and ProductName respectively.

public int Id { get; set; }
public string ProductName { get; set; }

Above the Id property, add the [Column] attribute to specify the name of the actual column in the table.

[Column("ProductID")]
public int Id { get; set; }

Do the same for the renamed ProductName property, and add the [Column] attribute to be the actual name of the column in the table.

[Column("Name")]
public string ProductName { get; set; }

Check for the [Column] Attribute When Reading Data

Add some code to the BuildEntityList() to take advantage of the [Column] attribute. Open the ProductRepository.cs file and add a using statement at the top of the file.

using System.ComponentModel.DataAnnotations
  .Schema;

In the BuildEntityList() method, modify the code within the loop, as shown in Listing 11. The first part of this code is the same. The difference is that you're checking to see if the variable col is null, and if it is, that means the column name didn't match any property names. Call the FirstOrDefault() method again on the props collection, but you're now looking for any property that has a [Column] attribute, and whether the Name property in the [Column] attribute matches the column name from the data reader. If you find it, the col variable is now set to a valid property, and now you can set the value into that property.

Try It Out

Run the console application again and you should see the same product objects displayed.



Listing 11: Add code to check for the [Column] attribute

for (int index = 0; index < rdr.FieldCount;
  index++) {
  // Get field name from data reader
  columnName = rdr.GetName(index);
  // Get property in entity that matches
  // the field name
  PropertyInfo col = props.FirstOrDefault(
    p => p.Name == columnName);

  if (col == null) {
    // Is column name in a [Column] attribute?
    col = props.FirstOrDefault(
      c => c.GetCustomAttribute
        <ColumnAttribute>()?.Name == columnName);
  }
  if (col != null) {
    // Get the value from the table
    var value = rdr[columnName];
    // Assign value to the property if not null
    if (!value.Equals(DBNull.Value)) {
      col.SetValue(entity, value, null);
    }
  }
}

Build SELECT Statement from Properties

If you take a look at the SalesLT.Product table in the AdventureWorksLT database, you'll see that there are many more columns than properties in the Product class. This means that sending the SELECT * FROM SalesLT.Product SQL statement to the server returns much more data than is needed. Obviously, a less wasteful approach is needed.

Actually, there's no need to pass in a SQL statement to the Search() method at all, as the statement can be inferred from the properties within the Product class. So, let's eliminate the SQL statement and read the property names (and any [Column] attributes) to build the SELECT statement to send to the database.

Let's also add some additional functionality to check whether a [NotMapped] attribute has been added to any properties. Sometimes you need additional properties in your entity class, but those properties aren't mapped to any column in the table; this is what the [NotMapped] attribute is for. To illustrate the [NotMapped] attribute, open the Product.cs file and add the following property somewhere within this class.

[NotMapped]
public bool IsSelected { get; set; }

Instead of gathering the properties of the class within the BuildEntityList() method, let's move this functionality into a new method. The new method uses reflection to collect the column name and the property information for each property in the entity class and return that as a collection of objects. Create a class to hold that information by right mouse-clicking on the Common folder and adding a new class called ColumnMapper as shown in the code snippet below.

#nullable disable

using System.Reflection;

namespace AdoNetWrapper.Common;

public class ColumnMapper {
  public string ColumnName { get; set; }
  public PropertyInfo PropertyInfo { get; set; }
}

Open the ProductRepository.cs file and add a few new properties to this class. You can see the collection of ColumnMapper classes that are going to hold the information about each property in the Product entity class. There are also properties for the schema and table name of the table this repository class is working with. The SQL property will hold the last statement used to retrieve data from the table.

public string SchemaName { get; set; }
public string TableName { get; set; }
public List<ColumnMapper> Columns { get; set; }
public string SQL { get; set; }

Add a method named Init() to the ProductRepository class to initialize some of these properties to valid start values.

protected virtual void Init() {
  SchemaName = "dbo";
  TableName = string.Empty;
  SQL = string.Empty;
  Columns = new();
}
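To make the goal concrete: for the Product class as mapped so far (Id to ProductID, ProductName to Name, IsSelected skipped), the generated statement should be shaped roughly like the fragment below. Showing the SalesLT schema here assumes a [Table] attribute supplies it; without one, the dbo default from Init() is used instead.

```sql
SELECT [ProductID], [Name]
FROM SalesLT.Product
```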



Listing 12: Add a method to build a collection of column information from the properties of your entity class

protected virtual List<ColumnMapper>
  BuildColumnCollection<TEntity>() {
  List<ColumnMapper> ret = new();
  ColumnMapper colMap;

  // Get all the properties in <TEntity>
  PropertyInfo[] props =
    typeof(TEntity).GetProperties();

  // Loop through all properties
  foreach (PropertyInfo prop in props) {
    // Is there a [NotMapped] attribute?
    NotMappedAttribute nm = prop
      .GetCustomAttribute<NotMappedAttribute>();
    // Only add properties that map to a column
    if (nm == null) {
      // Create a column mapping object
      colMap = new() {
        // Set column properties
        PropertyInfo = prop,
        ColumnName = prop.Name
      };

      // Is column name in [Column] attribute?
      ColumnAttribute ca = prop
        .GetCustomAttribute<ColumnAttribute>();
      if (ca != null &&
        !string.IsNullOrEmpty(ca.Name)) {
        // Set column name from [Column] attr
        colMap.ColumnName = ca.Name;
      }

      // Create collection of columns
      ret.Add(colMap);
    }
  }

  return ret;
}

Add a constructor to the ProductRepository() class to call the Init() method.

public ProductRepository() {
  Init();
}

Add a Method to Build a Collection of Columns

Let's add a new method named BuildColumnCollection() to build the collection of ColumnMapper objects, as shown in Listing 12, and remove this functionality from the BuildEntityList() method. Once this collection is built, you're going to be able to do two things: build the SELECT statement to submit to retrieve the data and map the data from the data reader to the properties in the class.

In this method, call the GetProperties() method on the TEntity type passed in. Once you have all the properties from the class, loop through each one and check to see if there's a [NotMapped] attribute on each column; if there is, bypass that column and don't add it to the collection of ColumnMapper objects. Otherwise, create a new ColumnMapper object and set the PropertyInfo property to the current PropertyInfo object, and the property name to the ColumnName property.

Next, check for a [Column] attribute on the property. If the [Column] attribute exists, make sure the Name property has been filled in on that attribute. It's possible to just specify a type for the column and not set the name for the column when using the [Column] attribute. If the Name property exists, replace the ColumnName property in the ColumnMapper object. Finally, add the ColumnMapper object to the collection to be returned, and repeat this process until all properties have been processed.

Add a Method to Set Table and Schema Properties

To build the SELECT statement, you need the schema name and table name in the database you're building the SQL for. By default, the SchemaName property is initialized to "dbo," as that's the most common schema in SQL Server. The TableName property is set to the entity class name. However, most entity classes use the [Table] attribute to specify the table name and optionally the schema name. Create a method in your RepositoryBase class named SetTableAndSchemaName and add the following code to this method.

protected virtual void
  SetTableAndSchemaName(Type typ) {
  // Is there a [Table] attribute?
  TableAttribute table =
    typ.GetCustomAttribute<TableAttribute>();
  // Assume table name is the class name
  TableName = typ.Name;
  if (table != null) {
    // Set properties from [Table] attribute
    TableName = table.Name;
    SchemaName = table.Schema ?? SchemaName;
  }
}

Add Method to Build a SELECT Statement

You're now going to create a method called BuildSelectSql() to call the BuildColumnCollection() method you just wrote. After building the column collection, loop through it and use the ColumnName property to create a SELECT statement

Listing 13: Add a method to build the SELECT SQL statement to submit to the database

protected virtual string
  BuildSelectSql<TEntity>() {
  Type typ = typeof(TEntity);
  StringBuilder sb = new(2048);
  string comma = string.Empty;

  // Build Column Mapping Collection
  Columns = BuildColumnCollection<TEntity>();

  // Set Table and Schema properties
  SetTableAndSchemaName(typ);

  // Build the SELECT statement
  sb.Append("SELECT");
  foreach (ColumnMapper item in Columns) {
    // Add column
    sb.Append($"{comma} [{item.ColumnName}]");
    comma = ",";
  }
  // Add 'FROM schema.table'
  sb.Append($" FROM {SchemaName}.{TableName}");

  return sb.ToString();
}



Listing 14: The BuildEntityList() is now much simpler because the column information is pre-built
protected virtual List<TEntity>
BuildEntityList<TEntity>(IDataReader rdr) { // Assign value to the property if not null
List<TEntity> ret = new(); if (!value.Equals(DBNull.Value)) {
Columns[index].PropertyInfo
// Loop through all rows in the data reader .SetValue(entity, value, null);
while (rdr.Read()) { }
// Create new instance of Entity }
TEntity entity = Activator
.CreateInstance<TEntity>(); // Add new entity to the list
ret.Add(entity);
// Loop through columns collection }
for (int index = 0; index < Columns.Count;
index++) { return ret;
// Get the value from the reader }
var value = rdr[Columns[index].ColumnName];

to submit to the database. Use a StringBuilder object to create the SELECT statement, so bring in the System.Text namespace by adding a using statement at the top of the ProductRepository class.

using System.Text;

Add the BuildSelectSql() method to the repository class, as shown in Listing 13. Call the BuildColumnCollection() method to fill in the Columns property. Call the SetTableAndSchemaName() method to set the TableName and SchemaName properties. Build the SELECT statement by iterating over the Columns collection and adding each ColumnName property to the SELECT statement.

Modify the Search() Method

Modify the Search() method as shown in the code below. Remove the string sql parameter passed in. Add the call to the BuildSelectSql() method and place the resulting SELECT statement into the SQL property of this class. Pass the SQL property to the CreateCommand() method. Then call the BuildEntityList() method as you did before.

public virtual List<TEntity> Search<TEntity>
  (string connectString) {
  List<TEntity> ret;

  // Build SELECT statement
  SQL = BuildSelectSql<TEntity>();

  using SqlServerDatabaseContext dbContext =
    new(connectString);

  dbContext.CreateCommand(SQL);

  // REST OF THE CODE HERE

  return ret;
}

Modify the BuildEntityList Method

Because you added the code to check for the [Column] attribute and to ignore any columns with the [NotMapped] attribute, the BuildEntityList() method is now much simpler. Replace the BuildEntityList() method with the code in Listing 14.

Try It Out

Open the Program.cs file, remove the Sql declaration and assignment, and remove it from the second parameter passed to the Search() method.

List<Product> list =
  repo.Search<Product>(ConnectString);

Run the console application and you should see the same product objects displayed.

Summary

In this article, you learned how to use .NET reflection to create a generic way to build a collection of entity objects from any table in a database. You learned how to use the [Column] and the [NotMapped] attributes to set the column name or ignore properties in your entity class. An abstract base class was illustrated from which you then provided a concrete implementation for accessing any SQL Server database. This same base class could then be used as the basis for any other database context classes you need to access Oracle, SQLite, or other databases that have a .NET provider.

In the next article, you're going to refactor the repository class so it's much more generic. You'll also add search capabilities to your repository class, retrieve a scalar value, work with multiple result sets, and call stored procedures.

Paul D. Sheriff



ONLINE QUICK ID 2207031

Writing High-Performance Code Using Span<T> and Memory<T> in C#

In this article, you'll be introduced to the new types introduced in C# 7.2: Span and Memory. I'll take a deep dive into Span<T> and Memory<T> and demonstrate how to work with them in C#.

Prerequisites

If you want to work with the code examples discussed in this article, you need the following installed on your system:

• Visual Studio 2022
• .NET 6.0
• ASP.NET 6.0 Runtime

If you don't already have Visual Studio 2022 installed on your computer, you can download it from here: https://visualstudio.microsoft.com/downloads/.

Joydip Kanjilal
joydipkanjilal@yahoo.com

Joydip Kanjilal is an MVP (2007-2012), software architect, author, and speaker with more than 20 years of experience. He has more than 16 years of experience in Microsoft .NET and its related technologies. Joydip has authored eight books, more than 500 articles, and has reviewed more than a dozen books.

Types of Memory Supported in .NET

Microsoft .NET enables you to work with three types of memory:

• Stack memory: Resides in the stack and is allocated using the stackalloc keyword
• Managed memory: Resides in the heap and is managed by the GC
• Unmanaged memory: Resides in the unmanaged heap and is allocated by calling the Marshal.AllocHGlobal or Marshal.AllocCoTaskMem methods

Newly Added Types in .NET Core 2.1

The newly introduced types in .NET Core 2.1 are:

• System.Span: This represents a contiguous section of arbitrary memory in a type-safe and memory-safe manner.
• System.ReadOnlySpan: This represents a type-safe and memory-safe read-only representation of an arbitrary contiguous area of memory.
• System.Memory: This represents a contiguous memory area.
• System.ReadOnlyMemory: Similar to ReadOnlySpan, this type represents a contiguous section of memory. However, unlike ReadOnlySpan, it's not a by-ref type.

Accessing Contiguous Memory: Span<T> and Memory<T>

You might often need to work with massive volumes of data in your applications. String handling is critical in any application because you must follow the recommended practices to avoid unnecessary allocations. You can use unsafe code blocks and pointers to directly manipulate memory, but this approach has considerable risks. Pointer manipulations are prone to bugs such as overflows, null-pointer accesses, buffer overruns, and dangling pointers. If a bug affects only the stack or static memory areas, it may be relatively harmless, but if it affects critical system memory areas, it may cause your application to crash. Enter Span<T> and Memory<T>.

© shutterstock/Buntoon Rodseng
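As a quick, runnable taste (my own example, not from the article) before the details: a Span<T> can sit over an array on the managed heap or over stack memory allocated with stackalloc, and a slice is just a view over part of the original block with no copying.

```csharp
// Spans over two of the memory types listed above, plus a slice
// that views part of an array without copying it.
using System;

public static class SpanDemo
{
    public static void Main()
    {
        // Span over managed (heap) memory
        int[] numbers = { 10, 20, 30, 40, 50 };
        Span<int> managed = numbers;

        // Span over stack memory
        Span<int> stack = stackalloc int[3] { 1, 2, 3 };

        // Slicing: a view over elements 1..3 of the array, no copy
        Span<int> slice = managed.Slice(1, 3);
        slice[0] = 99; // writes through to the underlying array

        Console.WriteLine(numbers[1]);   // 99
        Console.WriteLine(slice.Length); // 3
        Console.WriteLine(stack[2]);     // 3
    }
}
```

Because slice is only a view, writing through it mutates the original array; no copying or heap allocation occurs.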

20 Writing High-Performance Code Using Span<T> and Memory<T> in C# codemag.com


Span<T> and Memory<T> have been newly introduced in .NET. They provide a type-safe way to access contiguous regions of arbitrary memory. Both Span<T> and Memory<T> live in the System namespace and represent a contiguous block of memory, sans any copy semantics. The Span<T>, Memory<T>, ReadOnlySpan<T>, and ReadOnlyMemory<T> types are recent additions to C# and can help you work with memory directly in a safe and performant manner.

These new types ship in the System.Memory package and are intended to be used in high-performance scenarios where you need to process large amounts of data or want to avoid unnecessary memory allocations, such as when working with buffers. Unlike array types that allocate memory on the GC heap, these new types provide an abstraction over contiguous regions of arbitrary managed or native memory without allocating on the GC heap.

The Span<T> and Memory<T> structs provide low-level interfaces to an array, string, or any contiguous managed or unmanaged memory block. Their primary function is to foster micro-optimization and low-allocation code that reduces managed memory allocations, thus decreasing the strain on the garbage collector. They also allow for slicing, or dealing with a section of an array, string, or memory block, without duplicating the original chunk of memory. Span<T> and Memory<T> are very beneficial in high-performance areas, such as the ASP.NET 6 request-processing pipelines.

An Introduction to Span<T>
Span<T> (earlier known as Slice) is a value type introduced in C# 7.2 and .NET Core 2.1 with almost zero overhead. It provides a type-safe way to work with a contiguous block of memory such as:

• Arrays and subarrays
• Strings and substrings
• Unmanaged memory buffers

A Span<T> represents a contiguous chunk of memory that resides in the managed heap, the stack, or even in unmanaged memory. Memory allocated with stackalloc lives on the stack and doesn't require the garbage collector to manage its lifetime. Span<T> is capable of pointing to a chunk of memory allocated on the stack or on the heap. However, because Span<T> is defined as a ref struct, the span itself resides only on the stack.

The following are the characteristics of Span<T> at a glance:

• Value type
• Low or zero overhead
• High performance
• Provides memory and type safety

You can use Span<T> with any of the following:

• Arrays
• Strings
• Native buffers

The list of types that can be converted to Span<T> are:

• Arrays
• Pointers
• IntPtr
• stackalloc

You can convert all of the following to ReadOnlySpan<T>:

• Arrays
• Pointers
• IntPtr
• stackalloc
• string

Span<T> is a stack-only type; precisely, it's a by-ref type. Thus, spans can neither be boxed nor appear as fields of a class or ordinary struct, nor can they be used as generic type arguments. However, you can use spans to represent return values or method arguments. The code snippet below shows the core layout of the Span<T> struct:

public readonly ref struct Span<T> {
    internal readonly
        ByReference<T> _pointer;
    private readonly int _length;
    //Other members
}

You can take a look at the complete source code of the Span<T> struct here: https://github.com/dotnet/corefx/blob/master/src/Common/src/CoreLib/System/Span.cs.

The Span<T> source code shows that it basically comprises two read-only fields: a native pointer and a length denoting the number of elements that the Span contains.

A Span may be used in the same ways that an array can. However, unlike arrays, it can refer to stack memory, i.e., memory allocated on the stack, as well as managed and native memory. This provides an easy way for you to take advantage of performance improvements that were previously only available when dealing with unmanaged code.

Here's how Span<T> is declared in the System namespace:

public readonly ref struct Span<T>

To create an empty Span, you can use the Span<T>.Empty property:

Span<char> span = Span<char>.Empty;

The following code snippet shows how you can create a byte array in managed memory and then create a span instance out of it:

var array = new byte[100];
var span = new Span<byte>(array);

Programming Span<T> in C#
Here's how you can allocate a chunk of memory on the stack and use a Span to point to it:

Span<byte> span = stackalloc byte[100];

The following code snippet shows how you can create a Span using a byte array, store integers inside the byte array, and calculate the sum of all the integers stored:

var array = new byte[100];
var span = new Span<byte>(array);

byte data = 0;
for (int index = 0; index < span.Length; index++)
    span[index] = data++;

int sum = 0;
foreach (int value in array)
    sum += value;

The following code snippet creates a Span from native memory:

var nativeMemory = Marshal.AllocHGlobal(100);
Span<byte> span;
unsafe
{
    span = new Span<byte>(nativeMemory.ToPointer(), 100);
}

You can now use the following code snippet to store integers inside the memory pointed to by the Span and display the sum of all the integers stored:

byte data = 0;
for (int index = 0; index < span.Length; index++)
    span[index] = data++;

int sum = 0;
foreach (int value in span)
    sum += value;

Console.WriteLine($"The sum of the numbers in the array is {sum}");
Marshal.FreeHGlobal(nativeMemory);

You can also allocate a Span in stack memory using the stackalloc keyword, as shown below:

byte data = 0;
Span<byte> span = stackalloc byte[100];

for (int index = 0; index < span.Length; index++)
    span[index] = data++;

int sum = 0;
foreach (int value in span)
    sum += value;

Console.WriteLine($"The sum of the numbers in the array is {sum}");

Remember to enable compilation of unsafe code in your project. To do this, right-click on your project, click Properties, and check the unsafe code checkbox, as shown in Figure 1.

Figure 1: Turn on unsafe compilation for your project to enable unsafe code.

Span<T> and Arrays
Slicing enables data to be treated as logical chunks that can then be processed with minimal resource overhead. Span<T> can wrap an entire array and, because it supports slicing, you can make it point to any contiguous region within the array. The following code snippet shows how you can use a Span<T> to point to a slice of three elements within the array:

int[] array = new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Span<int> slice = new Span<int>(array, 2, 3);

There are two overloads of the Slice method available as part of the Span<T> struct, allowing slices to be created based on an index. This allows the Span<T> data to be treated as a series of logical chunks that can be processed individually or as desired by sections of a data processing pipeline.

You can use Span<T> to wrap an entire array. Because it supports slicing, it can point not only to the first element of the array but to any contiguous range of elements within it.

foreach (int i in slice)
    Console.WriteLine($"{i} ");

When you execute the preceding code snippet, the integers present in the sliced array are displayed at the console, as shown in Figure 2.

Figure 2: Integers present in the sliced array displayed at the console window
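To see that a slice is a view rather than a copy, and that spans work naturally as method parameters, consider this short sketch (the Sum helper is mine, not part of the article's listings):

```csharp
using System;

class SliceDemo
{
    // A span parameter accepts arrays, slices, and stackalloc buffers alike.
    static int Sum(ReadOnlySpan<int> values)
    {
        int total = 0;
        foreach (int value in values)
            total += value;
        return total;
    }

    static void Main()
    {
        int[] array = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
        Span<int> slice = new Span<int>(array, 2, 3); // elements 3, 4, 5

        Console.WriteLine(Sum(array));  // 45: the whole array
        Console.WriteLine(Sum(slice));  // 12: just the slice

        // Writing through the slice mutates the underlying array: no copy was made.
        slice[0] = 100;
        Console.WriteLine(array[2]);    // 100
    }
}
```

Because both the array and the slice convert implicitly to ReadOnlySpan<int>, a single Sum method serves every kind of buffer without overloads or copies.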



Span<T> and ReadOnlySpan<T>
A ReadOnlySpan<T> instance is often used to refer to array items or a chunk of an array. As opposed to arrays, ReadOnlySpan<T> instances can refer to native memory, managed memory, or stack memory. Both Span<T> and ReadOnlySpan<T> provide a type-safe representation of a contiguous region of memory. Although Span<T> provides read-write access to a region of memory, ReadOnlySpan<T> provides read-only access to a memory segment.

The following code snippet illustrates how you can use ReadOnlySpan to slice a portion of a string in C#:

ReadOnlySpan<char> readOnlySpan =
    "This is a sample data for testing purposes.";
int index = readOnlySpan.IndexOf(' ');
var data = ((index < 0)
    ? readOnlySpan
    : readOnlySpan.Slice(0, index)).ToArray();

An Introduction to Memory<T>
Memory<T> is a struct (not a ref struct, unlike Span<T>) that represents a contiguous region of memory and has a length, but doesn't necessarily start at index 0 and can be one of many regions inside another Memory. The memory represented by the Memory might not even be your process's own, as it could have been allocated in unmanaged code. Memory<T> is useful for representing data in buffers because it allows you to treat them like a single contiguous buffer without copying.

Here's how Memory<T> is defined:

public struct Memory<T> {
    void* _ptr;
    T[] _array;
    int _offset;
    int _length;

    public Span<T> Span =>
        _ptr == null
            ? new Span<T>(_array, _offset, _length)
            : new Span<T>(_ptr, _length);
}

In addition to Span<T>, Memory<T> provides a safe and sliceable view into any contiguous buffer, whether an array or a string. Unlike Span<T>, it has no stack-only constraints because it's not a ref-like type. As a result, you can place it on the heap, use it in collections or with async-await, save it as a field, or box it, just as you would any other C# struct.

The Span property allows you to get efficient indexing capabilities when you need to modify or process the buffer referenced by Memory<T>. Memory<T> is a more general-purpose and higher-level exchange type than Span<T>, with an immutable, read-only counterpart named ReadOnlyMemory<T>.

Although both Span<T> and Memory<T> represent a contiguous chunk of memory, unlike Span<T>, Memory<T> is not a ref struct. So, contrary to Span<T>, you can have Memory<T> anywhere on the managed heap. Hence, you don't have the same restrictions with Memory<T> as you do with Span<T>: You can use Memory<T> as a class field, and across await and yield boundaries.

ReadOnlyMemory<T>
Similar to ReadOnlySpan<T>, ReadOnlyMemory<T> represents read-only access to a contiguous region of memory but, unlike ReadOnlySpan<T>, it isn't a by-ref type.

Now refer to the following string that contains country names separated by space characters:

string countries = "India Belgium Australia USA UK Netherlands";

The ExtractStrings method extracts each of the country names, as shown below:

public static IEnumerable<ReadOnlyMemory<char>>
    ExtractStrings(ReadOnlyMemory<char> c)
{
    int index = 0, length = c.Length;
    for (int i = 0; i <= length; i++)
        if (i == length || char.IsWhiteSpace(c.Span[i]))
        {
            yield return c[index..i];
            index = i + 1;
        }
}

You can call the above method and display the country names at the console window using the following code snippet:

var data = ExtractStrings(countries.AsMemory());

foreach (var str in data)
{
    Console.WriteLine(str);
}

Advantages of Span<T> and Memory<T>
The main advantage of using the Span and Memory types is improved performance. You can allocate memory on the stack by using the stackalloc keyword, which allocates an uninitialized block of type T[size]. This isn't necessary if your data is already on the stack, but it's useful because buffers allocated this way exist only for as long as their scope lasts. If you're using a heap-allocated array, you can pass it through a method like Slice() and create views without copying any data.

Here are some more advantages:

• They reduce the number of allocations for the garbage collector. They also reduce the number of copies of


the data, and provide a more efficient way to work with multiple buffers at once.
• They allow you to write high-performance code. For example, if you have a large chunk of memory that you need to divide into smaller pieces, use Span as a view of the original memory. This allows your app to directly access the bytes from the original buffer without making copies.
• They allow you to directly access memory without copying it. This can be particularly useful when working with native libraries or interop with other languages.
• They allow you to eliminate bounds checking in tight loops where performance is critical (such as cryptography or network packet inspection).
• They allow you to eliminate boxing and unboxing costs associated with generic collections, like List.
• They enable writing code that is easier to understand by using a single data type (Span) rather than two different types (Array and ArraySegment).

Contiguous and Non-Contiguous Memory Buffers
A contiguous memory buffer is a block of memory that holds the data in sequentially adjacent locations. In other words, all of the bytes are next to each other in memory. An array represents a contiguous memory buffer. For example:

int[] values = new int[5];

The five integers in the above example will be placed in five sequential locations in memory, starting with the first element (values[0]).

In contrast to contiguous buffers, non-contiguous buffers cover cases where there are multiple blocks of data that aren't located next to one another, or when working with unmanaged code. The ReadOnlySequence<T> type was designed specifically for non-contiguous buffers and provides convenient ways to work with them.

A non-contiguous region of memory has no guarantee that the elements are stored in any particular order or that they're stored close together in memory. Non-contiguous buffers, such as ReadOnlySequence (when used with segments), reside in separate areas of memory that may be scattered across the heap and cannot be accessed through a single pointer.

For example, IEnumerable is non-contiguous because there's no way to know where the next item will be until you have enumerated over each one individually. In order to represent the gaps between segments, you must use additional data to track where each segment starts and ends.

Discontiguous Buffers: ReadOnlySequence<T>
Let's assume that you're working with a buffer that is not contiguous. For example, the data might be coming from a network stream, database call, or file stream. Each of these scenarios can have multiple buffers of varying sizes. A single ReadOnlySequence instance can contain one or more segments of memory, and each segment can have its own Memory instance. Therefore, a single ReadOnlySequence instance allows for better management of available memory and provides better performance than many concatenated Memory instances.

You can create a ReadOnlySequence<T> instance using its constructors, which accept an array (optionally with a start index and length), a ReadOnlyMemory<T>, or a linked chain of ReadOnlySequenceSegment<T> instances for the multi-segment case.

You now know that Span<T> and Memory<T> provide support for contiguous memory buffers such as arrays. The System.Buffers namespace contains a struct called ReadOnlySequence<T> that provides support for working with discontiguous memory buffers. The following code snippet illustrates how you can work with ReadOnlySequence<T> in C#:

int[] array = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
var readOnlySequence = new ReadOnlySequence<int>(array);
var slicedReadOnlySequence = readOnlySequence.Slice(1, 5);

You can also construct one from a ReadOnlyMemory<T>, as shown below:

int[] array = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
ReadOnlyMemory<int> memory = array;
var readOnlySequence = new ReadOnlySequence<int>(memory);
var slicedReadOnlySequence = readOnlySequence.Slice(1, 5);

A Real-Life Example
Let's now talk about a real-life problem and how Span<T> and Memory<T> can help. Consider the following array of strings that contains log data retrieved from a log file:

string[] logs = new string[]
{
    "a1K3vlCTZE6GAtNYNAi5Vg::05/12/2022 09:10:00 AM::http://localhost:2923/api/customers/getallcustomers",
    "mpO58LssO0uf8Ced1WtAvA::05/12/2022 09:15:00 AM::http://localhost:2923/api/products/getallproducts",
    "2KW1SfJOMkShcdeO54t1TA::05/12/2022 10:25:00 AM::http://localhost:2923/api/orders/getallorders",
    "x5LmCTwMH0isd1wiA8gxIw::05/12/2022 11:05:00 AM::http://localhost:2923/api/orders/getallorders",
    "7IftPSBfCESNh4LD9yI6aw::05/12/2022 11:40:00 AM::http://localhost:2923/api/products/getallproducts"
};


Remember, you can have millions of log records, so performance is critical. This example is just an extract from massive volumes of log data. Each row comprises the HTTP Request ID, the DateTime of the HTTP Request, and the endpoint URL. Now suppose you need to extract the request ID and the endpoint URL from this data.

You need a solution that's highly performant. If you use the Substring method of the string class, many string objects will be created, which would degrade the performance of your application. The best solution is to use a Span<T> here to avoid allocations, together with the Slice method, as illustrated in the next section.

Benchmarking Performance
It's time for some measurements. Let's now benchmark the performance of the Span<T> struct versus the Substring method of the String class.

Create a New Console Application Project in Visual Studio 2022
Let's create a console application project that you'll use for benchmarking performance. You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose Continue without code to launch the main screen of the Visual Studio 2022 IDE.

To create a new Console Application Project in Visual Studio 2022:

1. Start the Visual Studio 2022 IDE.
2. In the Create a new project window, select Console App, and click Next to move on.
3. Specify the project name as HighPerformanceCodeDemo and the path where it should be created in the Configure your new project window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the Place solution and project in the same directory checkbox. Click Next to move on.
5. In the next screen, specify the target framework you would like to use for your console application.
6. Click Create to complete the process.

You'll use this application in the subsequent sections of this article.

Install NuGet Package(s)
So far so good. The next step is to install the necessary NuGet package(s). To install the required packages into your project, right-click on the solution and select Manage NuGet Packages for Solution.... Now search for the package named BenchmarkDotNet in the search box and install it. Alternatively, you can type the command shown below at the NuGet Package Manager Command Prompt:

PM> Install-Package BenchmarkDotNet

Benchmarking Span<T> Performance
Let's now examine how to benchmark the performance of the Substring and Slice methods. Create a new class named BenchmarkPerformance with the code in Listing 1. You should note how data has been set up in the GlobalSetup method and the usage of the GlobalSetup attribute.

Now, write the two methods named Substring and Span, as shown in Listing 2. While the former retrieves the last country name with the Substring method of the String class, the latter extracts the last country name using the Slice method. The complete source code of the BenchmarkPerformance class is provided for your reference in Listing 3.

Listing 1: Setting up the benchmark data

[MemoryDiagnoser]
[Orderer(BenchmarkDotNet.Order.SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class BenchmarkPerformance
{
    [Params(100, 200)]
    public int N;

    string countries = null;
    int index, numberOfCharactersToExtract;

    [GlobalSetup]
    public void GlobalSetup()
    {
        countries = "India, USA, UK, Australia, Netherlands, Belgium";
        index = countries.LastIndexOf(",", StringComparison.Ordinal);
        numberOfCharactersToExtract = countries.Length - index;
    }
}

Listing 2: The Substring and Span methods

[Benchmark]
public void Substring()
{
    for (int i = 0; i < N; i++)
    {
        var data = countries.Substring(index + 1,
            numberOfCharactersToExtract - 1);
    }
}

[Benchmark(Baseline = true)]
public void Span()
{
    for (int i = 0; i < N; i++)
    {
        var data = countries.AsSpan().Slice(index + 1,
            numberOfCharactersToExtract - 1);
    }
}

Listing 3: The complete source code

[MemoryDiagnoser]
[Orderer(BenchmarkDotNet.Order.SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class BenchmarkPerformance
{
    [Params(100, 200)]
    public int N;

    string countries = null;
    int index, numberOfCharactersToExtract;

    [GlobalSetup]
    public void GlobalSetup()
    {
        countries = "India, USA, UK, Australia, Netherlands, Belgium";
        index = countries.LastIndexOf(",", StringComparison.Ordinal);
        numberOfCharactersToExtract = countries.Length - index;
    }

    [Benchmark]
    public void Substring()
    {
        for (int i = 0; i < N; i++)
        {
            var data = countries.Substring(index + 1,
                numberOfCharactersToExtract - 1);
        }
    }

    [Benchmark(Baseline = true)]
    public void Span()
    {
        for (int i = 0; i < N; i++)
        {
            var data = countries.AsSpan().Slice(index + 1,
                numberOfCharactersToExtract - 1);
        }
    }
}

Executing the Benchmarks
Write the following piece of code in the Program.cs file to run the benchmarks:

using BenchmarkDotNet.Running;
using HighPerformanceCodeDemo;
using System.Runtime.InteropServices;

class Program
{
    static void Main(string[] args)
    {
        BenchmarkRunner.Run<BenchmarkPerformance>();
    }
}

To execute the benchmarks, set the compile mode of the project to Release and run the following command in the same folder where your project file resides:

dotnet run -p HighPerformanceCodeDemo.csproj -c Release

Figure 3 shows the result of the execution of the benchmarks.

Figure 3: Benchmarking Span (Slice) vs Substring Performance

Interpreting the Benchmarking Results
As you can see in Figure 3, there's absolutely no allocation when you're using the Slice method to extract the string. For each of the benchmarked methods, a row of result data is generated. Because there are two benchmark methods, there are two rows of benchmark result data. The benchmark results show the mean execution time, Gen0 collections, and the allocated memory. As is evident from the benchmark results, Span is more than 7.5 times faster than the Substring method.

Limitations of Span<T>
Span<T> is stack-only, which means it's inappropriate for storing references to buffers on the heap, as in routines performing asynchronous calls. It's not allocated on the managed heap but on the stack, and it doesn't support boxing, to prevent promotion to the managed heap. You can't use Span<T> as a generic type argument, but you can use it as a field type in a ref struct. You can't assign Span<T> to variables of type dynamic, object, or any interface type. You can't use Span<T> as a field in a reference type, nor can you use it across await and yield boundaries.

It's important to note that you can't have a Span<T> field in a class, create an array of Span<T>, or box a Span<T> instance. Note that neither Span<T> nor Memory<T> implements IEnumerable<T>, so you aren't able to use LINQ operators with either of these. However, you can take advantage of SpanLinq or NetFabric.Hyperlinq to get around this limitation.

Conclusion
In this article I examined the features and benefits of Span<T> and Memory<T> and how you can implement them in your applications. I also discussed a real-life scenario where Span<T> can be used to improve string handling performance. Note that Span<T> is more versatile and better performing than Memory<T>, but it isn't a complete replacement for it.

Joydip Kanjilal



ONLINE QUICK ID 2207041

Building MVC Applications


in PHP Laravel: Part 2
In part one of this series (https://codemag.com/Article/2205071/Building-MVC-Applications-in-PHP-Laravel-Part-1), I reviewed
the MVC architecture concepts. I also covered how PHP Laravel implements the Model section of MVC. In this article, I’ll cover
the Controllers and Views in MVC and how PHP Laravel implements them.

Controllers
C is for Controllers in the acronym MVC. The main purpose of a controller is to handle an incoming request and consequently return a proper response. Often forgotten (or hidden) is the routing mechanism in MVC applications. It's the router engine that decides which controller will handle a new incoming request.

A Laravel project ships with routing files representing routes for different aspects of the application. Two important route types exist in Laravel:

• Web routes: Web routes serve the public and protected routes of an application.
• API routes: API routes serve the public and protected routes of an API application in Laravel. Yes! Not only can you build a back-end application with PHP Laravel, but you can also build a REST API in Laravel. You can read a complete guide on how to build a RESTful API in Laravel here: https://www.toptal.com/laravel/restful-laravel-api-tutorial.

You aren't limited to these built-in route types. You can even define your own. For instance, let's say that you want to build a section of an application that only admin users can access, something like an admin panel for your main application. You can create a new route file and put all admin routes in that file. Here's a complete guide on creating custom route files in Laravel: https://onlinewebtutorblog.com/how-to-create-custom-route-file-in-laravel-8/.

When you create a new Laravel application, you'll have a single route defined inside the routes/web.php file:

<?php
use Illuminate\Support\Facades\Route;

Route::get('/', function () {
    return view('welcome');
});

This basic route in Laravel defines a request handler, in the form of a closure (function), for accessing the root path of the application. The closure simply returns a view named welcome.

Views will be discussed in the next section of the article. For now, it's enough to understand that accessing the root path of the application returns an HTML page that's defined inside the welcome view.

Notice how I'm using a static get() function defined on a Facade class called Route. Facades are heavily used in Laravel. You can read more about them here: https://laravel.com/docs/9.x/facades#main-content.

Facades make it easy to group together and access features from a single endpoint.

Resource Controllers
In Laravel, Resource controllers stem from the general REST concepts. REST APIs are typically based on HTTP methods (GET, POST, PUT, etc.) to access a resource via a URL and the use of JSON or XML to transmit data.

In Laravel, you can think of a Model as a Resource. You want to create a controller to handle model creation, deletion, reading, updating, etc. You want to group all those model-related functions into a single controller class.

Laravel can generate a Resource Controller for a single model. It scaffolds a set of controller actions to manage this model.

Let's create your first resource controller by making use of the make:controller Artisan command:

sail artisan make:controller \
    PostController \
    --resource \
    --requests \
    --model=Post

Or:

php artisan make:controller \
    PostController \
    --resource \
    --requests \
    --model=Post

Let's understand the command sections:

• Specify the --resource flag to instruct Laravel to scaffold a Resource Controller.
• Specify the --requests flag to instruct Laravel to generate Form Requests for the store() and update() actions.
• Specify the --model option to specify the resource/model you want to create the resource controller for.

In Laravel, there's an Artisan Command for everything.

Bilal Haidar
bhaidar@gmail.com
https://www.bhaidar.dev
@bhaidar

Bilal Haidar is an accomplished author, Microsoft MVP of 10 years, ASP.NET Insider, and has been writing for CODE Magazine since 2007. With 15 years of extensive experience in Web development, Bilal is an expert in providing enterprise Web solutions. He works at Consolidated Contractors Company in Athens, Greece as a full-stack senior developer. Bilal offers technical consultancy for a variety of technologies including Nest JS, Angular, Vue JS, JavaScript and TypeScript.
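Coming back to the custom admin route file mentioned earlier: registering one is a two-step affair. Here's a minimal sketch (the file name, prefix, and middleware choices are my own, assuming the default Laravel 9 RouteServiceProvider; it isn't runnable verbatim because it excerpts two files):

```php
<?php
// app/Providers/RouteServiceProvider.php (excerpt): inside boot(),
// register the custom file alongside the default web and api groups.
use Illuminate\Support\Facades\Route;

$this->routes(function () {
    Route::middleware(['web', 'auth']) // only authenticated users
        ->prefix('admin')              // all URLs start with /admin
        ->group(base_path('routes/admin.php'));
});

// routes/admin.php: the custom route file itself.
Route::get('/dashboard', function () {
    return view('admin.dashboard');    // hypothetical admin view
});
```

With this in place, /admin/dashboard resolves through the custom file, and every route it contains automatically gets the admin prefix and middleware.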

codemag.com Building MVC Applications in PHP Laravel: Part 2 27


A Resource Controller scaffolds a set of seven actions to manage a resource/model, as shown in Table 1.

Verb       URI                  Action    Route Name
GET        /posts               index     posts.index
GET        /posts/create        create    posts.create
POST       /posts               store     posts.store
GET        /posts/{post}        show      posts.show
GET        /posts/{post}/edit   edit      posts.edit
PUT/PATCH  /posts/{post}        update    posts.update
DELETE     /posts/{post}        destroy   posts.destroy

Table 1: Resource routes and details

Locate the routes/web.php file and let's add a new resource route as follows:

Route::resource('posts', PostController::class);

The resource() method is syntactical sugar. Under the hood, it generates the seven routes/actions listed in Table 1.

To view the routes generated, run the following command:

sail artisan route:list --except-vendor

Or:

php artisan route:list --except-vendor

The command generates the output shown in Figure 1.

Figure 1: Listing routes

Ignore the first two lines. Just focus on the routes that are now defined for the Post resource/model.

Now that you understand the Post resource routes, let's switch to the PostController class and study its actions. The command generates the PostController.php file inside the /app/Http/Controllers folder. Listing 1 shows the entire source for the PostController class.

Listing 1: PostController class

<?php

namespace App\Http\Controllers;

use App\Http\Requests\StorePostRequest;
use App\Http\Requests\UpdatePostRequest;
use App\Models\Post;

class PostController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function index()
    {
        //
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return \Illuminate\Http\Response
     */
    public function create()
    {
        //
    }

    /**
     * Store a newly created resource in storage.
     *
     * @param \App\Http\Requests\StorePostRequest $request
     * @return \Illuminate\Http\Response
     */
    public function store(StorePostRequest $request)
    {
        //
    }

    /**
     * Display the specified resource.
     *
     * @param \App\Models\Post $post
     * @return \Illuminate\Http\Response
     */
    public function show(Post $post)
    {
        //
    }

    /**
     * Show the form for editing the specified resource.
     *
     * @param \App\Models\Post $post
     * @return \Illuminate\Http\Response
     */
    public function edit(Post $post)
    {
        //
    }

    /**
     * Update the specified resource in storage.
     *
     * @param \App\Http\Requests\UpdatePostRequest $request
     * @param \App\Models\Post $post
     * @return \Illuminate\Http\Response
     */
    public function update(UpdatePostRequest $request, Post $post)
    {
        //
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param \App\Models\Post $post
     * @return \Illuminate\Http\Response
     */
    public function destroy(Post $post)
    {
        //
    }
}
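To give a taste of where these stubs are headed, here's a minimal sketch of an implemented index() action. The view name and pagination size are my own choices, not from the scaffolded code, and the snippet is a method excerpt rather than a complete file:

```php
<?php
// Excerpt: one possible body for PostController::index().
public function index()
{
    // Fetch the newest posts, 10 per page, via the Post Eloquent model.
    $posts = Post::latest()->paginate(10);

    // Hand the collection to a Blade view (hypothetical view name).
    return view('posts.index', ['posts' => $posts]);
}
```

The remaining actions follow the same pattern: read or mutate the model, then return a view, a redirect, or a JSON response.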



The action methods are well documented. I'll implement their logic along the way. For example, if you request:

• GET /posts: The PostController->index() action method handles this request.
• POST /posts: The PostController->store() action method handles this request.
• DELETE /posts/1: The PostController->destroy() action method handles this request.

In addition to creating resource controllers in Laravel, you can also create normal controller classes and invokable controllers.

Form Requests
Laravel provides a validation library to help you validate all incoming requests. There are multiple ways to use this library. You can read more about the Laravel validation library here: https://laravel.com/docs/9.x/validation.

One way of using the validation library in Laravel is to make use of Form Requests. Let's revisit the store() controller action method.

public function store(StorePostRequest $request)
{ }

The Form Request for this action method is named StorePostRequest. It's a class defined inside the path /app/Http/Requests folder. Listing 2 shows the entire source code of this class.

You can always ignore the authorize() method and simply return true instead of false.

The rules() method lets you define the validation rules that apply when creating a new Post model.

The validation library in Laravel is rich in validation rules. You can check the available list of validation rules here: https://laravel.com/docs/9.x/validation#available-validation-rules.

Let's define some validation rules for creating a Post model.

public function rules()
{
    return [
        'title' => 'required|unique:posts|max:255',
        'body' => 'required|max:500',
        'published' => 'sometimes|date'
    ];
}

The Post Title should be required, be unique across all the Post records in the database, and have a maximum of 255 characters. Similarly, the Post Body should be required. This is just an example; you're welcome to be as creative as you want.

Before executing the store() action method, Laravel runs the validation rules on the incoming request. If validation passes, Laravel executes the store() method; otherwise, Laravel returns an error response.

Another option to employ validations in your controllers without the use of a Form Request is to use inline validation inside the controller action method. Listing 3 shows the inline version of the validation rules.

Form Request promotes encapsulation and reusability all the way!

Controller Middleware
Middleware in Laravel provides a convenient mechanism for inspecting and filtering HTTP requests entering your application. For instance, you can define a middleware to verify that the user is authenticated before accessing a controller. You can read the full documentation on Laravel Middleware here: https://laravel.com/docs/9.x/middleware.

Laravel ships with a few middleware out of the box. You aren't limited to these and can create your own.

For example, you can specify that only authenticated users can access and use the PostController. You can specify the auth middleware inside the controller itself (https://laravel.com/docs/5.8/authentication), as shown in Listing 4.

Listing 2: StorePostRequest form request class

<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class StorePostRequest extends FormRequest
{
    /**
     * Determine if the user is authorized to
     * make this request.
     *
     * @return bool
     */
    public function authorize()
    {
        return false;
    }

    /**
     * Get the validation rules that apply to the request.
     *
     * @return array
     */
    public function rules()
    {
        return [
            //
        ];
    }
}

Listing 3: Inline validation rules

/**
 * Store a newly created resource in storage.
 *
 * @param \Illuminate\Http\Request $request
 * @return \Illuminate\Http\Response
 */
public function store(Request $request)
{
    $validated = $request->validate([
        'title' => 'required|unique:posts|max:255',
        'body' => 'required|max:500',
        'published' => 'sometimes|date'
    ]);

    // The post is valid...
}
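One nicety of Form Requests worth knowing about: you can also customize the error text per rule. The messages() override is standard FormRequest API, although the wording below is my own. A sketch added to StorePostRequest might look like this:

```php
// Inside StorePostRequest — an optional override of
// FormRequest::messages() mapping "field.rule" to custom text
public function messages()
{
    return [
        'title.required' => 'A post title is required.',
        'title.unique'   => 'That title is already taken.',
        'body.max'       => 'The body may not exceed 500 characters.',
    ];
}
```

When validation fails, these strings replace the framework's default messages in the $errors collection.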


Listing 4: Specify auth middleware inside the controller

class PostController extends Controller
{
    /**
     * Instantiate a new controller instance.
     *
     * @return void
     */
    public function __construct()
    {
        $this->middleware('auth');
    }

    // …
}

Specifying the auth middleware inside the controller means that all the controller action methods will require the user to be authenticated before they can handle requests.

You can exclude some actions and make them publicly accessible by using the except() method as follows:

$this->middleware('auth')->except('index');

Another way of specifying a middleware is to define it on the routes instead.

Route::resource('posts', PostController::class)
    ->middleware('auth')
    ->except(['index']);

The Route facade has a set of rich methods that you can use to customize the routes. For instance, you can learn about Route Groups here: https://laravel.com/docs/9.x/routing#route-groups.

Implicit Route Model Binding
Implicit route model binding is a smart feature in Laravel that can resolve Eloquent models defined in routes or controller actions and whose type-hinted variable names match a route segment name.

Let's look at the update() action method:

public function update(
    UpdatePostRequest $request,
    Post $post)
{
    //
}

Laravel, by default, assumes the route parameter corresponds to the ID of the model. You can customize the key the way you want (https://laravel.com/docs/9.x/routing#customizing-the-key).

To update a Post record, you execute a PUT/PATCH /posts/{post} request. Typically, the {post} route segment is replaced by the Post ID field.

In this case, Laravel notices a match between the route segment name and the second variable name passed into the update() action method. It retrieves the value of the post route parameter, executes a database query to retrieve the corresponding Post model, and finally feeds the model into the update() method.

You can read more about Route Model Binding here: https://laravel.com/docs/9.x/routing#route-model-binding.

Simple Controllers
In addition to defining resource controllers, you can still define normal and invokable controllers. To create a normal controller, run the following command:

sail artisan make:controller PostController

Or

php artisan make:controller PostController

Then, to define a route using this new controller, switch to the routes/web.php file and add the following route:

Route::get('/posts', [
    PostController::class,
    'index'
]);

You can even mix both resource routes and normal routes inside the routes/web.php file. For example:

Route::get('/post/print', [
    PostController::class,
    'printPost'
]);
Route::resource('post', 'PostController');

Make sure to add any custom routes before you register the resource route. You can read more about supplementing resource controllers here: https://laravel.com/docs/9.x/controllers#restful-supplementing-resource-controllers.

Dependency Injection and Controllers
Laravel comes with a built-in Inversion-of-Control (IoC) Service Container that manages class dependencies and performs dependency injection. You can read about dependency injection here: https://www.freecodecamp.org/news/a-quick-intro-to-dependency-injection-what-it-is-and-when-to-use-it-7578c84fa88f/.

In general, it's recommended to keep your controllers thin and minimal while encapsulating additional functionality in Services or Actions. You can read more about Services here: https://farhan.dev/tutorial/laravel-service-classes-explained/. Also, you can read more about implementing Actions here: https://freek.dev/1371-refactoring-to-actions.

Instead of placing all the logic inside the controller class itself, you can refactor all this additional code into its own Service class or Action class.

The Laravel documentation contains tons more information, such as Laravel Controllers, Service Providers, and Service Container.
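Invokable controllers, mentioned above, are single-action controllers: the class defines one __invoke() method and the route references only the class. A hedged sketch follows; the PrintPostController name is hypothetical, but the --invokable flag and the __invoke() convention are standard Laravel:

```php
<?php
// Created with: php artisan make:controller PrintPostController --invokable

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class PrintPostController extends Controller
{
    // The single action; no method name is needed in the route
    public function __invoke(Request $request)
    {
        //
    }
}

// In routes/web.php, reference just the class:
// Route::get('/post/print', PrintPostController::class);
```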


Figure 2: Laravel 9 running inside a browser

Now that you know how to create a controller and how a controller handles requests, it's time to cover the last letter of MVC, which in this case is the letter V, representing Views or HTML pages in your application.

Views
Views represent the User Interface (UI) of an MVC application. Typically, they render as HTML/CSS pages for browsers.

Let's revisit the routes/web.php file and study the first route added by Laravel.

<?php

use Illuminate\Support\Facades\Route;

Route::get('/', function () {
    return view('welcome');
});

This route maps any request to the root path of the application and, in response, returns the View named welcome. Laravel defines and stores all views inside the /resources/views folder. Locate this folder and notice that it currently contains a single View, the welcome.blade.php file.

Laravel makes use of a simple, yet powerful templating engine named Blade (https://laravel.com/docs/9.x/blade). Unlike other PHP templating engines, Blade templates are all compiled into plain PHP code and cached until they are modified.

Think of Blade as the ASP.NET Razor (https://www.w3schools.com/asp/razor_intro.asp), or Django DTL or Jinja 2 (https://docs.djangoproject.com/en/4.0/topics/templates/).

Let's run the application and see the view returned inside the browser. Figure 2 shows the app running inside the browser.

The welcome.blade.php is just an HTML page with PHP snippets here and there. The Blade engine has rich components and directives that you can review here: https://laravel.com/docs/9.x/blade#components and here: https://laravel.com/docs/9.x/blade#blade-directives.

Layout Template
Most of the applications you build have a general theme across all the pages. For instance, you might have a navigation bar at the top, a sidebar, and a footer. The main content area is dynamic and changes from one page to another. Instead of repeating the same layout over and over on each page, you put all the common components inside one file and then reuse them across the rest of the pages in the application.

The Blade Engine allows building Layouts (https://laravel.com/docs/9.x/blade#building-layouts). A Layout template holds all the common sections and defines placeholders to be later populated by the View extending this Layout View.

Create a new blade file named /resources/views/layout.blade.php. I've created a GitHub Gist (https://is.gd/09RS9m) that you can use to copy/paste the source code from.

The template is a basic HTML5 page with two placeholders:

• Title
• Content

The Views that extend this Layout template can substitute values for both the title and content.

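The Gist linked above contains the full markup; as a rough sketch of the idea (using the standard Blade @yield directive for the two placeholders), such a layout can be as small as:

```html
<!-- /resources/views/layout.blade.php — a minimal sketch -->
<!DOCTYPE html>
<html>
<head>
    <!-- The Title placeholder -->
    <title>@yield('title')</title>
</head>
<body>
    <nav><!-- shared navigation bar --></nav>
    <main>
        <!-- The Content placeholder -->
        @yield('content')
    </main>
    <footer><!-- shared footer --></footer>
</body>
</html>
```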


Figure 3: Index view

Notice how the Layout template is embedding the JavaScript and CSS files. This is handled by Laravel Mix (https://laravel.com/docs/9.x/mix#main-content).

Now let's create the application's views to handle creating, reading, updating, and destroying Post records.

Index View
Let's start by defining the Index View. This view displays all Post records that are stored in the database. On this view, you will be able to create a new Post, edit existing ones, and, of course, destroy one. Figure 3 shows the view up and running.

Create a new Blade view and store it under the path /resources/views/post/index.blade.php. I've created a GitHub Gist (https://is.gd/nIlHKi) that you can use to copy/paste the source code from.

Back in Table 1, I compiled the list of all Post routes and actions. In order to access this view, you need to implement the PostController index() action method.

Switch to the PostController and write the following:

public function index()
{
    $posts = Post::all();
    return view('post.index', compact('posts'));
}

The action method is straightforward. First, you query for all Post records in the database. Then, you use the view() helper function to return the Index View with some data to render inside the view. In case you're using the Laravel standard way of defining views, the view name is nothing but a concatenation of the folder name and the first part of the view file name with a dot in between. In case you're nesting your view in a deeper folder structure, you've got to include all the subfolders in the path.

Let's now highlight the important sections of the Index View.

I want this view to extend the Layout template. I do this by adding the following statement:

@extends('layout')

Then, I want to provide a value for the Title section that I've defined in the Layout template. I do this by adding the following statement:

@section('title', 'All Posts')

To render this view inside the Content section of the Layout template, I wrap all my HTML markup inside the following statement:

@section('content')
@endsection
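Putting those three directives together, the skeleton of index.blade.php looks roughly like this (the actual table markup lives in the Gist):

```html
@extends('layout')

@section('title', 'All Posts')

@section('content')
    <!-- Create Post button, table of posts, etc. -->
@endsection
```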


The rest is just HTML and a bit of Tailwind CSS (https://tailwindcss.com/).

I will highlight the most important parts of the HTML that are purely related to Laravel Blade.

The Create Post button is defined as follows (with CSS omitted):

<a href="{{ route('posts.create') }}">
    Create Post
</a>

First of all, notice the {{ and }}. Those are tags you use to embed dynamic data or access PHP code.

Laravel defines the route() helper function. Instead of typing the entire URL of the endpoint, use this function. This approach has many advantages. One advantage I can quickly think of is that when you change the actual URL of an endpoint, the route name always refers to whatever the path is.

The route() function takes the route name as input. Table 1 contains all the route information that you've created for the Post Model.

When the view renders in the browser, the helper function will be replaced with this:

<a href="/posts/create">Create Post</a>

The next thing to cover is looping over the Post records that were retrieved from the database and sent down by the index() action method.

@foreach($posts as $post)
<tr>
    <td>{{ $post->title }}</td>
    <td>{{ $post->slug }}</td>
    <td>{{ $post->body }}</td>
    <td>{{ $post->published_at }}</td>
</tr>
@endforeach

The @foreach() is one of many other useful functions that Blade offers. It loops over the collection of Post records, and for each record, it renders a single table row. Every $post represents a single Post Model object. Hence, you can access all properties of this Model inside Blade.

Create Post View
Create a new Blade view and store it under the path /resources/views/post/create.blade.php. I've created a GitHub Gist (https://is.gd/fkEGEa) that you can use to copy/paste the source code from. Figure 4 shows the view up and running.

The view wraps an HTML5 Form to allow the user to create a new Post record. Listing 5 shows a simplified version of the form.

The form posts to the route named posts.store. It submits a POST request to the endpoint /posts with the Post fields in the payload of the request.

The form uses the @csrf Blade directive. Laravel offers a layer of protection against cross-site request forgery. You need to always embed this directive when using forms in Blade. You can read more about CSRF Protection in Laravel here: https://laravel.com/docs/9.x/csrf#main-content.

I'm using the old() helper function to preserve the value that the user enters. For instance, let's say that I fill out the form and leave a required field empty. When the form is submitted, Laravel validates the input fields. If any field is not valid, Laravel redirects to the same view. Hence, the page refreshes in its place. The old() function comes in handy in preserving whatever I've typed before submitting the form.

Figure 4: Create the Post view

Listing 5: Create post view

<form method="post" action="{{ route('posts.store') }}">
    @csrf
    <input type="text" name="title"
        value="{{ old('title') }}" />

    <textarea name="body">{{ old('body') }}</textarea>

    <input type="date" name="published_at"
        value="{{ old('published_at') }}" />

    <button type="submit">Save</button>
</form>

Listing 6: Render error messages

@if ($errors->any())
<div class="alert alert-danger mb-4" role="alert">
    <ul>
        @foreach ($errors->all() as $error)
        <li>{{ $error }}</li>
        @endforeach
    </ul>
</div>
@endif

The PostController create() action method handles rendering this view.

public function create()
{
    return view('post.create');
}

The PostController store() action method handles the form submission.

public function store(StorePostRequest $request)
{
    $post = Post::create([
        ...$request->validated(),
        'user_id' => 1
    ]);
    return to_route('posts.index')
        ->with(
            'success',
            'Post is successfully saved'
        );
}

The store() method has the StorePostRequest Form Request as a single input parameter. This class handles the validation of the incoming request. It then provides the store() method with valid data that is accessible via the $request->validated() method.

I've created a GitHub Gist (https://is.gd/UOY79V) that you can use to copy/paste the source code from.

The code uses the Post::create() method that represents mass assignment in Eloquent. It accepts as input an associative array of all fields that belong to the $fillable array.

After creating the record, it redirects the user to the posts.index route. It uses a newly introduced helper function in Laravel 9, the to_route() function. This function redirects to a route. In addition, you pass a success message back to the Index View.

Let's switch back to the Index View and see how and where the success message is displayed.

@if (session('success'))
<div
    class="alert alert-success mb-2"
    role="alert"
>
    {{ session('success') }}
</div>
@endif

This is the section that renders the success message and is placed above the table. Figure 5 shows where the message renders.

In case validation fails, Laravel renders the same view with an additional piece of data, named $errors, representing a collection of errors resulting from the validation. To render those errors, you need to add a section on top of the Create Post form. Listing 6 shows the entire code for this section.

The code loops over all error messages and displays them inside an unordered list.

Edit Post View
Create a new Blade view and store it under the path /resources/views/post/edit.blade.php. I've created a GitHub Gist (https://is.gd/fkEGEa) that you can use to copy/paste the source code from. Figure 6 shows the view up and running.

This view is similar to the Create View. In this case, the view renders an existing Post record in edit mode.

To reach this view, you need to adjust the Index View and wrap the edit icon with a hyperlink to the Edit View URL.

<a href="{{ route('posts.edit', $post->id) }}">
</a>

The posts.edit route name corresponds to /posts/{post}/edit. This route corresponds to the PostController edit() action method.

public function edit(Post $post)
{
    return view('post.edit', compact('post'));
}

Notice how the edit() action method is making use of Laravel Implicit Route Binding to receive a Post object instead of the Post ID. The method returns the Edit View together with the $post variable containing the Post object.

The view binds the Post properties to the various HTML controls, as you've seen in the Create Post View section.
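In the edit form, each input needs an initial value from the model while still preserving the user's input after a failed validation. The old() helper takes a default value as its second argument, so a sketch of the title input in edit.blade.php might look like:

```html
<!-- old() falls back to the model value on the first render,
     and to the previously submitted value after a validation failure -->
<input type="text" name="title"
    value="{{ old('title', $post->title) }}" />
```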


Figure 5: Success message

The form uses a new Blade directive, the @method() directive.

<form method="post"
    action="{{ route('posts.update', $post->id) }}"
>
    @csrf
    @method('PATCH')

</form>

Figure 6: Edit the Post view

Remember from Table 1 that Laravel defines the update route to have a method of PUT/PATCH.

HTML forms do not support PUT, PATCH, or DELETE actions. Therefore, when defining PUT, PATCH, or DELETE routes that are called from an HTML form, you'll need to add a hidden _method field to the form. The value sent with the _method field will be used as the HTTP request method. Instead of adding the hidden field yourself, Laravel offers the @method() directive.

When the user submits the form, Laravel once again validates the input fields. Based on the result of validation, it either redirects the user back to the view to correct any missing or wrong data, or updates the record in the database and redirects back to the Index View.

public function update(
    UpdatePostRequest $request,
    Post $post)
{
    $post->update($request->validated());

    return to_route('posts.index')
        ->with(
            'success',
            'Post is successfully saved'
        );
}

Notice how the update() action method uses Laravel Implicit Route Binding to receive a Post object instead of the Post ID.

The update() method has the UpdatePostRequest Form Request as a single input parameter. This class handles the validation of the incoming request. It then provides the update() method with valid data that's accessible via the $request->validated() method.

I've created a GitHub Gist (https://is.gd/ccvPip) that you can use to copy/paste the source code from. The code uses the Post instance's update() method to update the record in the database. It accepts as input an associative array of all fields that belong to the $fillable array.

After updating the record, it redirects the user to the posts.index route. It uses a newly introduced helper function in Laravel 9, the to_route() function. This function redirects to a route. In addition, you pass a success message back to the Index View.

Destroy Post View
I won't create a separate view for this section. Instead, I'll adjust the Index View to include a button to allow the user to destroy (delete) a Post record. Figure 7 shows the view up and running.

Switch to the Index View and add a small form to allow the user to delete the Post record. Listing 7 shows the entire code for this section.

Listing 7: Destroy view

<form
    method="post"
    action="{{ route('posts.destroy', $post->id) }}"
    onsubmit="return confirm('Do you really want to ...');"
>
    @csrf
    @method('DELETE')

    <button type="submit"></button>
</form>

When the user clicks the button to destroy a single Post record, the form shows a confirmation message to verify the user's request before executing the action.

The form submits to the route named posts.destroy, which, according to Table 1, corresponds to the PostController destroy() action method.

Once again, you're adding the _method hidden field to specify DELETE as the form method name.

public function destroy(Post $post)
{
    $post->delete();

    return to_route('posts.index')
        ->with(
            'success',
            'Post is successfully deleted'
        );
}

The action method deletes the Post record and then redirects the user to the Index View once again.

Notice how the destroy() action method is making use of Laravel Implicit Route Binding to receive a Post object instead of the Post ID.

Clean Up
When creating or saving a Post record, I want to keep the slug field up to date with the current Title field.

There are plenty of ways to implement this feature. I opt for Model Events (https://laravel.com/docs/9.x/eloquent#events).

You can hook into the different Model events and perform some actions. In this case, you'll hook into the saving event to update the slug field value the first time the Post record is created and every time it's saved.

Locate the /app/Providers/AppServiceProvider.php file and add this code inside the boot() method.

public function boot()
{
    Post::saving(static function ($post) {
        $post->slug = \Str::slug($post->title);
    });
}
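Str::slug() generates a URL-friendly slug from the given string: it lowercases the text, strips punctuation, and joins words with dashes. A quick sketch of what to expect from it (a hedged illustration of the framework's string helper):

```php
<?php

use Illuminate\Support\Str;

// Lowercases, strips punctuation, and joins words with dashes
Str::slug('My First Post');    // "my-first-post"
Str::slug('Hello   World!');   // "hello-world"
```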


Figure 7: Destroy post record

When the Post record is created or updated, the static closure runs and updates the Post slug field. Laravel defines many Model events that you can use as per the scenario at hand.

You could have defined the same code block on the Post Model. However, I find it cleaner to use the provider for such tasks. In Laravel, Service Providers are the central place of all Laravel application bootstrapping. A provider defines two methods:

• register()
• boot()

You use the register() method when you want to register classes with the Laravel Service Container. Right after running all register() methods in the application, Laravel runs the boot() method. At this stage, you are sure that all classes have been registered inside the Service Container and it's safe to access them.

You can read more about Laravel Service Providers here: https://laravel.com/docs/9.x/providers.

Summary
I've been using Tailwind CSS throughout the different views that I've implemented. If you're interested in using Tailwind CSS in Laravel, I advise checking this simple guide that will teach you how to install and configure Tailwind CSS in your Laravel application: https://tailwindcss.com/docs/guides/laravel.

I've borrowed the HTML markup for my Views from the following resource: https://larainfo.com/blogs/tailwind-css-simple-post-crud-ui-example. They've got many posts and material on using Tailwind CSS in Laravel.

In this article, I covered using Laravel Blade as part of building MVC applications in Laravel. However, Laravel allows you to use other client-side rendering engines and isn't limited to Blade only.

For instance, you can build a Laravel application for the back-end with React JS, Vue JS, Svelte JS, Livewire, or Inertia JS for the front-end.

Now that you know how PHP Laravel implements the MVC architecture, you can start building your own applications with Laravel. What you have seen here is just the tip of the iceberg. My plan is to use this article and the application that we have built together as a basis to build and add more Laravel features very soon.

Stay tuned to discover more with PHP Laravel.

Bilal Haidar


TypeScript: An Introduction

Writing JavaScript can be difficult. The speed and simplicity of development using a weakly typed language like JavaScript has its merits, but, as whole books have attested, JavaScript isn't a perfect language. Yet JavaScript powers so much of the code we write these days, whether it's server-side (Node or Deno), browsers, or even application development. As JavaScript projects grow, the weak typing can get in your way. Let's talk about TypeScript as a solution to some of the limitations of JavaScript.

Shawn Wildermuth
shawn@wildermuth.com
wildermuth.com
twitter.com/shawnwildermut

Shawn Wildermuth has been tinkering with computers and software since he got a Vic-20 back in the early '80s. As a Microsoft MVP since 2003, he's also involved with Microsoft as an ASP.NET Insider and ClientDev Insider. He's the author of over twenty Pluralsight courses, written eight books, an international conference speaker, and one of the Wilder Minds. You can reach him at his blog at http://wildermuth.com. He's also making his first, feature-length documentary about software developers today called "Hello World: The Film." You can see more about it at http://helloworldfilm.com.

I expect that there are three types of developers reading this article. First, you JavaScript developers who want to understand how TypeScript works: Welcome to the article. Second, for you developers from other languages who are suspicious of JavaScript and wonder if TypeScript fixes everything in JavaScript: I'm glad you're here! For the third type of developer, who already uses TypeScript and wants to find a mistake in this article: Please correct me if I get anything wrong.

Before You Start
Although it can be helpful to use the TypeScript playground (https://shawnl.ink/ts-playground), you'll likely want to set up your computer and code editors for TypeScript. Although many of you will use TypeScript in other frameworks and libraries (such as Angular, Vue, and React), you might be interested in learning how to use TypeScript directly. To get started, TypeScript requires Node and NPM to be installed. To do this, download the installer at https://shawnl.ink/install-node.

Once you have Node/NPM installed, you can install the compiler (globally in this case) with NPM:

> npm install typescript -g

Now that TypeScript is installed, you can create your first TypeScript file by naming it index.ts. The compiler command is tsc (TypeScript Compiler). To compile your first TypeScript file, call the compiler with the file name:

> tsc index.ts

If the compilation is successful, it creates a new file called index.js. For common editors, TypeScript is often enabled by default. For example, in Visual Studio Code, you'll get IntelliSense and error checking without any extensions, as seen in Figure 1 and Figure 2.

Figure 1: Error checking in Visual Studio Code

Figure 2: IntelliSense working in TypeScript

Visual Studio works in a similar way. By default, TypeScript is enabled, as seen in Figure 3.

Figure 3: TypeScript in Visual Studio

No extensions are required. It just works. The default behavior in Visual Studio is to compile TypeScript when you compile your project. It can be overzealous and try to compile every TypeScript file in any libraries you're using (e.g., NPM's node_modules). You may not be using Visual Studio to compile your TypeScript (e.g., you're using the TypeScript compiler to turn your TypeScript into JavaScript on the command line), and you may need to turn off this automatic compilation. To do this, you'll need to edit the .csproj file of your project and add the TypeScriptCompileBlocked element to disable this feature:

<Project Sdk="...">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
    <TypeScriptCompileBlocked>
      true
    </TypeScriptCompileBlocked>
    ...
  </PropertyGroup>
  ...
</Project>

Now that you're set up to use TypeScript, let's dive into the language.

TypeScript and JavaScript
JavaScript is an interpreted language. That means that it's a text file that a JavaScript engine processes to make it code at runtime. If you've used JavaScript, this may seem obvious. To prevent common errors, JavaScript has a handful of tools called linters that can be run to see if any common issues exist. This includes syntax errors but also a configurable set of rules to look for. Linters are commonly used by editors to give hints while you write the code. By the time you're ready to use the code, you have some level of confidence that the code will work, but it doesn't always ensure that the code is working.

When I say that JavaScript is weakly typed, I mean that a variable isn't tied to a specific type when created. For example:

let a = "Shawn";
console.log(typeof a); // string
a = 1;
console.log(typeof a); // number

When the variable a is created, it's a string only because I assign it a string. Once I assign it a number, it becomes a number type. This extends to other parts of the language. For example, function parameters are all optional. So if you expect a parameter, you need to check to see if it's okay before you use it:

function writeName(name) {
  // Could be empty, null, or undefined
  if (name) {
    console.log(name);
  }
}

writeName("Bob");
writeName();

It introduces a way to ensure that the assumptions in your code have the effect of improving the quality of your code. When you're writing a small project, the benefits are likely to be small. But when you're creating a large-scale JavaScript project, this additional type checking can provide enormous benefits.

What Is TypeScript?
At its core, TypeScript is a static type checker. What that means is that compiling TypeScript adds type checking to the compilation step. It does this by adding a type system and other forward-looking features to JavaScript. The language looks a lot like JavaScript. In fact, TypeScript is just a super-set of the JavaScript language. Although created inside Microsoft, it isn't (as I've heard some people think) C# for the browser. That's a whole other solution (hint: it rhymes with "laser").

Instead, TypeScript is all about making JavaScript more scalable. Although creating languages that compile down to JavaScript isn't a new idea, TypeScript is different because JavaScript is valid TypeScript. Because TypeScript is a super-set of JavaScript, it only adds new features to JavaScript to make it easier to manage large projects. Let's see an example:

const element =
  document.getElementById("someObject");

element.addEventListener("click",
  function () {
    element.classList.toggle("selected");
  });

This is valid TypeScript. What does it output?

"use strict";
const element =
  document.getElementById("someObject");

element.addEventListener("click",
  function () {
    element.classList.toggle("selected");
  });

Do you see the difference? Only the "use strict" is added. This simple piece of JavaScript is completely valid TypeScript. That's the magic of the language. You can opt into as little or as much of the benefits of TypeScript as you want. Let's look at another example, this time, using type information in the TypeScript:

// TypeScript

// The collection
const theList = new Array<string>();

// Function to add items
function addToCollection(item: string) {
  theList.push(item);
}

// Use the typed function
addToCollection("one");
addToCollection("two");
addToCollection(3); // TypeScript Error

The generic argument in the Array specifies that the list should only accept strings. Then, in the function, you're specifying that the function should only take a string for the parameter. The compiled JavaScript looks like this:

// The collection
const theList = new Array();

// Function to add items
function addToCollection(item) {
    theList.push(item);
}

// Use the typed function
addToCollection("one");
addToCollection("two");
addToCollection(3); // TypeScript Error

Although the type safety functionality in TypeScript is in the source file, none of it translates into the JavaScript. So why does it matter? The TypeScript benefits are about doing type-checking at compilation time without trying to make JavaScript into a strongly typed language. This is why it's so beneficial to large projects where the level of complexity can benefit from some level of compiler checking against the assumptions in your code.

Another benefit of TypeScript is that you can write your code without thinking about the JavaScript environment. Let's start with another small piece of TypeScript:

// TypeScript

class Animal {
  name = "";
  age = 0;
  fed = false;

  feed(food: string) {
    this.fed = true;
  }
}

In this case, you're creating a class, which isn't supported in older versions of JavaScript. When you build TypeScript, you can specify the target JavaScript level. For example, if you build it for ECMAScript 2015, you're using that version of JavaScript classes:

// ECMAScript 2015

"use strict";
class Animal {
    constructor() {
        this.name = "";
        this.age = 0;
        this.fed = false;
    }
    feed(food) {
        this.fed = true;
    }
}

But if you target older browsers (e.g., ECMAScript 5), you'd
the parameter. Not clear here is that the push function on get something quite different:
theList only accepts a string as well. When you use the func-
tion, the two calls with a string work fine, but the third // ECMAScript 5
shows an error during TypeScript compilation. How does this
affect the compiled output? "use strict";
var Animal = /** @class */ (function () {
// Resulting JavaScript     function Animal() {
        this.name = "";
"use strict";         this.age = 0;

40 TypeScript: An Introduction codemag.com


        this.fed = false;
    }
    Animal.prototype.feed = function (food) {
        this.fed = true;
    };
    return Animal;
}());

You can see what TypeScript generates by using the TypeScript playground at https://shawnl.ink/ts-playground.

A Type System
At the end of the day, TypeScript can help you think less about which level of JavaScript (e.g., ECMAScript) you're writing for and opt into features that are helpful to your development environment. Enough talk! Let's dig into the features of TypeScript next.

A Type System for JavaScript?
Microsoft has proposed a change to JavaScript to introduce this type system syntax to JavaScript itself. It's very early in the process, but you can read more about that at https://shawnl.ink/ms-proposal-type-syntax.

TypeScript Features
Now that you've learned the basic ideas behind TypeScript, let's see how you'd use TypeScript in your own code. The lowest hanging fruit of TypeScript is, of course, type checking.

Type Checking
You can specify the type by annotating it like so:

let theName: string = "Shawn";

By using a colon and then the type name, you can tell TypeScript that this variable should always be a string. If you try to assign a non-string, it causes a TypeScript error. For example, if you do this:

let theName: string = "Shawn";
theName = 1;

The TypeScript compiler complains:

D:\writing\CoDeMag\TypeScript\code>tsc index.ts
index.ts:3:1 - error TS2322: Type 'number' is
not assignable to type 'string'.

3 theName = 1;
  ~~~~~~~

Found 1 error in index.ts:3

You don't need to specify the type because TypeScript infers the type from the assignment when you create a variable:

let theName = "Shawn";

// identical to
let theName: string = "Shawn";

In most cases, you only need to specify the type if you're not assigning an initial value. This type information can be useful in functions too:

function write(message: string) {
  // no need to test for data type
  console.log(message);
}

Putting these together means that anyone using the write function gets an error if they attempt to call the write message with anything other than a string. When creating objects in TypeScript, the types are inferred too:

let person = {
  firstName: "Shawn",
  lastName: "Wildermuth",
  age: 53
};

person.age = "55"; // Won't Work

For arrays, the array type is inferred. For example, TypeScript assumes that this array is an array of strings:

let people = [ "one", "two", "three" ];

people.push(1); // won't work

This is true of arrays of objects too. You can see this in a tooltip with the type that was inferred in Figure 4.

Figure 4: Inferring array types

Sometimes the types are more complex than you expect. Let's talk about different type checking.

Complex Types
When you're building JavaScript, sometimes it's beneficial not to tie a type specifically. You might want to specify that you support a variety of different data types. That's where the special TypeScript type called "any" comes in. You can specify that any type is possible with the any type. This essentially tells the compiler that you need to use weak typing:

let x: any = "Hello";
x = 1; // works as any can change type

This is similar to using types in other scenarios (like function parameters). When you want flexibility about what's used in a use case, you could use the any type there too:

function write(message: any) {
  console.log(message);
}

write("Foo");


write(1);
write(true);

In this case, any type passed in as a message would work. But there are times when you want to constrain the types. If you want to only allow numbers and strings, you might write complex code to test the datatype with an any type, but instead you could use union types. These are types that have a pattern matching as the type checking. These types are separated by the or operator (i.e., |):

function write(message: string | number) {
  console.log(message);
}

write("Foo");
write(1);
write(true); // fails

Unlike other statically typed languages, this allows you to specify patterns for types that may not fit into the limitations of strong typing.

What Isn't TypeScript
I think it's important to understand what TypeScript is not. Although TypeScript is a compiler to convert code to JavaScript, that's the end of its responsibility. That means it doesn't bundle your projects, or do code splitting, or anything that bundlers (such as WebPack, Rollup, etc.) do. Unless you're using a bundler's plug-in for TypeScript, TypeScript is about generating JavaScript.

Functions
Because functions are a central part of any JavaScript project, you won't be surprised that functions can contain type checking too. Let's start with a simple JavaScript function:

function formatName(firstName, lastName) {
  if (!firstName || !lastName) return null;
  return `${lastName}, ${firstName}`;
}

let fullName = formatName("Shawn", "Wildermuth");

If you define this in TypeScript, it infers that the firstName and lastName are any objects and that the function returns a string. Although this is mostly correct, it's not exactly correct. Let's type those parameters:

function formatName(firstName: string,
  lastName: string) {
  if (!firstName || !lastName) return null;
  return `${lastName}, ${firstName}`;
}

let fullName = formatName("Shawn", "Wildermuth");

That helps confirm that you have to send in strings for the names. But you're still inferring the return type. Let's add the return type:

function formatName(firstName: string,
  lastName: string) : string {
  if (!firstName || !lastName) return null;
  return `${lastName}, ${firstName}`;
}

Note that the return type isn't before the function but after the function and after a colon. Although this return isn't needed (it's inferred), you might know more than the TypeScript inference. In this case, you know that you can return a null or a string, so you can use a union type:

// Using an ellipsis for conciseness
function formatName(...) : string | null {

Of course, if your function doesn't return anything, it's inferred as the void type. You can be explicit about this as well:

// Using an ellipsis for conciseness
function formatName(...) : void {

Using types for parameter types and return values should expose whether you're using functions incorrectly.

Now that I've talked about types in general, let's talk about defining your own types.

Type Definitions
The first way to define a type is with the type keyword. Here, you can create a named type with a definition of what you expect. For example, you could define a type for the Person type:

type Person = {
  firstName: string,
  lastName: string,
  age: number
};

Once a type is defined, you can use it to define variables:

let people: Person[];

This defines the variable people as accepting an array of type Person, but it only defines the type, not initialization. For example, if you wanted to initialize it, you'd need to assign a value:

let people: Person[] = [];

Alternatively, you can use a generic declaration (much like you might know from other languages):

let people = new Array<Person>();

In any of these cases, you'll see that the TypeScript compiler is going to type check that the shape of the Person is assigned to this array:

// Fails as this shape isn't the same as Person
people.push({
  fullName: "Shawn Wildermuth",
  age: 53
});

// This Works
people.push({
  firstName: "Shawn",
  lastName: "Wildermuth",
  age: 53
});

Using type definitions simply defines the shape that's required. Note that you can still use an anonymous object here, as long as it matches the type pattern.

Let's look at some other ways to define types.

Classes
If you're coming from most major languages, you're used to using classes as the central part of your development


pattern. In JavaScript, classes are a part of the modern JavaScript language, but unlike C#, Java, and others, classes represent another way to package code, but they aren't required or central to JavaScript. In either case, TypeScript supports class definitions with type checking. Although JavaScript classes aren't typed, TypeScript can provide typing:

class Person {
  firstName: string;
  lastName: string;
  age: number;

  write() {
    console.log(this.firstName +
      " " +
      this.lastName);
  }
}

let person = new Person();
person.firstName = "Shawn";
person.lastName = "Wildermuth";
person.write();

Unlike a type definition, classes represent more than just data. They represent a runtime type as well:

let isPerson = person instanceof Person;

Like other parts of JavaScript, TypeScript allows you to add type checking in the typical way you'd expect. Depending on what level of JavaScript you're compiling to, this is emitted as a JavaScript class or an object notation that was useful in ECMAScript 3. In either case, being able to define classes in TypeScript allows you to use this newer feature of JavaScript without having to think about the ultimate version of JavaScript that the user will use (such as modern browsers, Node, etc.).

Classes introduce the concept of constructors. A constructor is used to specify initialization parameters to a class. You could add a constructor to the Person class like so:

class Person {

  firstName: string;
  lastName: string;
  age: number;

  constructor(firstName: string,
    lastName: string,
    age: number) {
    this.firstName = firstName;
    this.lastName = lastName;
    this.age = age;
  }

  write() {
    console.log(this.firstName +
      " " +
      this.lastName);
  }
}

A constructor is just a function that accepts parameters that can be used to instantiate your object. In this case, it also reports a type error when you try to create an instance when a constructor is specifying a pattern of initialization:

// Fails
let person = new Person();
// Succeeds
let person = new Person("...", "...", 53);

One shortcut to this type of construction is that if you specify a visibility in the constructor, it creates the fields for you:

class Person {

  // No longer necessary
  // firstName: string;
  // lastName: string;
  // age: number;

  constructor(public firstName: string,
    public lastName: string,
    public age: number) {
    // this.firstName = firstName;
    // this.lastName = lastName;
    // this.age = age;
  }

  write() {
    console.log(this.firstName +
      " " +
      this.lastName);
  }
}

This allows you to specify fields in a shorter form than doing all the initialization. You've seen how you can directly expose variables as fields, but TypeScript also supports the accessor syntax (similar to class properties in other languages):

class Person {

  constructor(public firstName: string,
    public lastName: string,
    public age: number) {
  }

  get yearsSinceAdult() {
    return this.age - 18;
  }

  set yearsSinceAdult(value: number) {
    this.age = value + 18;
  }
}

let person = new Person("..,", "...", 53);
const adulting = person.yearsSinceAdult; // 35
person.yearsSinceAdult = 25;
const age = person.age; // 43

This allows you to specify field-like access while being able to intercept the getters and setters of a property. TypeScript supports other modifiers (like protected, private, readonly, etc.) but I won't be covering them in this article.

Source Code
The source code can be downloaded at https://github.com/wilder-minds/code-typescript-article
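As a hedged sketch of the readonly and private modifiers just mentioned (this Account class is my own illustration, not code from the article):

```typescript
class Account {
  // readonly: may only be assigned at declaration or in the constructor
  readonly id: string;
  // private: invisible outside the class
  private balance = 0;

  constructor(id: string) {
    this.id = id;
  }

  deposit(amount: number): number {
    this.balance += amount;
    return this.balance;
  }
}

const acct = new Account("a-1");
console.log(acct.deposit(50)); // 50
// acct.id = "a-2";           // compile-time error: id is read-only
// console.log(acct.balance); // compile-time error: balance is private
```

The commented-out lines would each fail at compile time, which is the point: the constraints live in the type system and disappear entirely from the emitted JavaScript.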


Inheritance in TypeScript
Classes support the notion of inheritance. But in JavaScript (and TypeScript by extension) this inheritance isn't class-based but object-based. This object-based inheritance is why the language of TypeScript classes isn't exactly like other languages. For example, if you wanted to inherit from a base class, what you're really doing is extending that type using the extends keyword:

class Manager extends Person {
  reports: Person[];
}

You extend the Person type with a new type called Manager. Unless you define your own constructor, you can just use the super-class' constructor. In this case, the newly created Manager object is both a Person and a Manager:

let mgr = new Manager("..,", "...", 53);
mgr.reports.push(person);

let isPerson = mgr instanceof Person; // true
let isManager = mgr instanceof Manager; // true

This allows you to subclass or specialize classes as necessary. Sometimes you don't want inheritance but instead, you want to specify that an object adheres to a contract (e.g., interfaces). This requires you to create an interface type that has the required fields, methods, etc. Let's create an interface for the Person object:

interface IPerson {
  firstName: string;
  lastName: string;
  age: number;
}

Instead of a class needing to extend a type, an interface guarantees that the class has certain members. You can use this by using the implements keyword:

class Person implements IPerson {

If the class was missing any of the elements of the interface, you'd get a type checking error. You can implement more than one interface but you can't extend more than one super class. So this is perfectly acceptable as long as the object adheres to both contracts:

class Person implements IPerson, IWriteable {

You might be wondering what the difference is between an interface and type definitions. In fact, they're very similar. You can use implements with type definitions:

interface IPerson {
  firstName: string;
  lastName: string;
  age: number;
}

type IWriteable = {
  write() : void;
}

class Person implements IPerson, IWriteable {

Hopefully at this point, you're seeing patterns of usage with TypeScript that could be helpful in improving the code quality of your own projects.

Configuring TypeScript Compilation
Without a configuration file for TypeScript, it uses conservative defaults. For example, you may see errors like this:

tsc index.ts
index.ts:26:7 - error TS1056: Accessors are
only available when targeting
ECMAScript 5 and higher.

26 get yearsSinceAdult() {
   ~~~~~~~~~~~~~~~

index.ts:30:7 - error TS1056: Accessors are
only available when targeting
ECMAScript 5 and higher.

30 set yearsSinceAdult(value: number) {
   ~~~~~~~~~~~~~~~

Found 2 errors in the same file,
starting at: index.ts:26

In this case, it shows an error in your code because it's trying to compile down to the lowest common denominator (e.g., ECMAScript 3). That leads to configuring TypeScript. You can create a default configuration file by using the compiler:

> tsc --init

This creates a file called tsconfig.json. The configuration that this option adds shows all the possible settings and can be a bit overwhelming. You can create your own as a simple tsconfig.json file. I'll start out with an empty object with one property called include:

{
  "include": [ "index.ts" ]
}

You could also specify wildcards:

"include": [ "**/*.ts" ],

Using wildcards can get you in trouble if you're using NPM, so often you'll want to specifically exclude the node_modules directory:

{
  "include": [ "**/*.ts" ],
  "exclude": [ "node_modules/" ],
}

You'll notice that if you run the compiler with no arguments, the error is the same because you haven't told TypeScript how you want to compile the code. You do this with an object called compilerOptions:

{
  "include": [ "**/*.ts" ],


"exclude": [ "node_modules/"], Visual Studio, etc.) read these type libraries and use that
SPONSORED SIDEBAR:
"compilerOptions": { } information to help with IntelliSense or other hinting. The
} TypeScript compiler uses this type information to confirm Ready to Modernize
that your code is correct as well. For example, if you wanted a Legacy App?
Most of the configuration happens in this property. For ex- to parse a date with moment:
ample, let’s fix the error by telling TypeScript what version Need FREE advice on
of JavaScript to compile to. Let’s set it to EcmaScript 2015, let date = moment("2025-05-05"); migrating yesterday’s
which is good for most modern browsers: legacy applications
This looks like it’s usage from JavaScript, but the type infor- to today’s modern
{ mation is there (and inferred): platforms? Get answers
"include": ["**/*.ts"], by taking advantage
"exclude": ["node_modules/"], let date: moment.Moment = moment("2025-05-05"); of CODE Consulting’s
"compilerOptions": { years of experience by
This is what you’d use to be explicit about the type informa- contacting us today to
"target": "ES2015"
schedule your free hour
} tion. Why is this important? It means that by using Type-
of CODE Consulting call.
} Script even if you aren’t using the type checking, you will
No strings. No commitment.
gain benefits in development because of the type libraries. Nothing to buy.
Now if you execute the compiler (tsc), it compiles the code It’s a good use for TypeScript even if you aren’t sold on the For more information,
safely. Without making any changes, TypeScript is some- static analysis. visit www.codemag.com/
what permissive (e.g., making any un-type definition an consulting or email us at
any type), but often, for large codebases, you’ll also turn
on strictness to force typing: Where Are We? info@codemag.com.

TypeScript is an important language that allows you to im-


{ prove your code quality in large projects. There are benefits
"include": ["**/*.ts"], beyond just being more stringent in your own coding. You
"exclude": ["node_modules/"], can gain benefits even if you don’t end up diving into any
"compilerOptions": { the advanced features of TypeScript. In addition, you’ll see
"target": "ES2015", its use in many of the frameworks you’re using or that you
"strict": true will eventually use. I hope I’ve encouraged you to take a
} look and see how it fits into your own projects.
}
 Shawn Wildermuth
This should give you a tiny taste of the configuration. There 
are a lot of options that may be required depending on your
environment, but this should get you started.

Type Libraries
One of the benefits of TypeScript that may not be obvious at
first is to consume something called Type Libraries. When
you write TypeScript, it’s checking for self-confirming type
information. In other words, your classes, functions, and
types will be checked within your project. What about other
frameworks or libraries that you’re using? If you’re working
within an NPM-enabled project, you might have some librar-
ies. For example, the moment* library could be added to
your project like so:

> npm install moment

I’m using moment as an example,


although its use isn’t necessary any
longer in most JavaScript projects.

When you use the library, how does TypeScript help there?
Many of these libraries have added type information in
the form of Type Libraries (typically {projectName}.g.ts).
In the moment project, it contains a moment.g.ts. This
means that when you use moment in your own project, it
knows about the interface and types from this library (even
though it’s JavaScript). Most editors (Visual Studio Code,

codemag.com TypeScript: An Introduction 45


ONLINE QUICK ID 2207061

Developing Dashboards Using Grafana

In my earlier article on Building Dashboards Using Bokeh (https://www.codemag.com/Article/2111061/Building-Dashboards-Using-Bokeh), I talked about how you can build dashboards programmatically using the Bokeh Python library. What if you want to create dashboards but don't want to spend too much time coding them? The good news is that you can do that using a tool called Grafana. In this article, I'll walk you through how to get started with Grafana, and how you can use it to build interesting dashboards. I'll also describe two projects that you can build using Grafana:

• How to create dynamic auto-updated time series charts
• How to display real-time sensor data using MQTT

Wei-Meng Lee
weimenglee@learn2develop.net
http://www.learn2develop.net
@weimenglee

Wei-Meng Lee is a technologist and founder of Developer Learning Solutions (www.learn2develop.net), a technology company specializing in hands-on training on the latest technologies. Wei-Meng has many years of training experience and his training courses place special emphasis on the learning-by-doing approach. His hands-on approach to learning programming makes understanding the subject much easier than reading books, tutorials, and documentation. His name regularly appears in online and print publications such as DevX.com, MobiForge.com, and CODE Magazine.

Installing Grafana
The first good news about Grafana is that it's free to use for most use cases, except where you need enterprise features such as advanced security, reporting, etc. If you want to modify the source code of Grafana, be sure to check the licensing info at https://grafana.com/licensing/.

To download the free enterprise version of Grafana, go to https://grafana.com/grafana/download (see Figure 1). There, you'll find the instructions to download Grafana for the OS that you're running.

Figure 1: Instructions for downloading Grafana for the various OSes

For Windows, you can download the Windows installer (https://dl.grafana.com/enterprise/release/grafana-enterprise-8.5.2.windows-amd64.msi) and then run it.

For Mac, you can download and install Grafana directly using Terminal:

$ curl -O https://dl.grafana.com/enterprise/release/grafana-enterprise-8.5.2.darwin-amd64.tar.gz
$ tar -zxvf grafana-enterprise-8.5.2.darwin-amd64.tar.gz

Once the above steps are done, you should now find a folder named grafana-8.5.2 in your home directory. On Windows,


Grafana will be installed in the following directory: C:\Program Files\GrafanaLabs\grafana\.

Starting the Grafana Service
On the Mac, you need to manually start the Grafana service by using the following commands in Terminal:

$ cd ~
$ cd grafana-8.5.2
$ ./bin/grafana-server web

On Windows, Grafana is installed as a Windows service and is automatically started after you have installed it.

Logging in to Grafana
Once Grafana has been installed on your computer, open it using a Web browser using the following URL: http://localhost:3000/login (see Figure 2).

Figure 2: Logging in to Grafana

The default username is admin and the password is also admin. Once you've logged in, you'll be asked to change your default password. After that, you'll see the page shown in Figure 3.

Figure 3: The home page of Grafana

If you forget your password, you can reset the admin password using the following command:

./bin/grafana-cli admin reset-admin-password password

Creating Data Sources
Grafana can load data from a variety of data sources, such as Prometheus, MySQL, InfluxDB, PostgreSQL, Microsoft SQL Server, Oracle, MongoDB, and more. If the data source that you need isn't available, you can locate additional data sources via plug-ins (Settings > Plugins). A good example of a data source that a lot of data analysts use is CSV, which is supported in Grafana through the CSV data source plug-in.


Although you can use CSV as a data source in Grafana, querying your data (and manipulating it) is much easier if you use a database data source, such as MySQL. For this article, I'll show you how to import your CSV file into a MySQL database server and then use MySQL as the data source in Grafana.

Figure 4: Import data into the table.
Figure 5: Viewing the imported Insurance dataset
Figure 6: Configuring a new data source
Figure 7: Add a new data source.

Preparing the MySQL Databases
For this article, I'm going to assume that you already have MySQL and MySQL Workbench installed. In MySQL Workbench, log in to your local database server. Create a new query and enter the following SQL statements to create a new account named user1 with password as the password:


CREATE USER 'user1'@'localhost' IDENTIFIED BY
'password';
GRANT ALL ON *.* TO 'user1'@'localhost';

Next, create a new database named Insurance:

CREATE DATABASE Insurance;
USE Insurance;

Once the Insurance database is created on MySQL, right-click on Tables and select Table Data Import Wizard (see Figure 4).

You'll be loading a CSV file into the Insurance database. For this example, I'll use the insurance dataset from https://www.kaggle.com/datasets/teertha/ushealthinsurancedataset.

In the Table Data Import wizard that appears, enter the path of the CSV file that you've downloaded and follow the steps. Specify that the data of the CSV file be loaded onto the Insurance database; you also have the option to name the table. If you take the default options, the content of the CSV file is loaded onto a table named Insurance. You should now be able to see the imported table and its content (see Figure 5).

Using the MySQL Data Source
With the database prepared, it's now time to configure the MySQL data source in Grafana. In Grafana, click Configuration > Data sources (see Figure 6).

Click Add data source (see Figure 7).

Type in MySQL and then click on the MySQL data source that appears (see Figure 8).

Figure 8: Locate the MySQL data source.

Enter the details of the MySQL connection as shown in Figure 9.

Figure 9: Configure the MySQL data source to connect to the Insurance database.

On the bottom of the page, click Save & test to ensure that the connection to the MySQL database server is working correctly.

Creating a Dashboard
Now that the data source configuration is done, it's time to get into the meat of what we want to do! In Grafana, select + > Dashboard to create a new dashboard (see Figure 10).

In Grafana, a dashboard is a set of one or more panels organized and arranged into one or more rows.

Figure 10: Add a new dashboard.
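If you'd rather script the CSV import than click through the Table Data Import Wizard, the same load can be sketched with pandas. SQLite stands in for MySQL here so the snippet is self-contained (for a real MySQL server you'd hand to_sql a SQLAlchemy engine instead of this connection), and the rows and column names are illustrative, not the full Kaggle file:

```python
import sqlite3

import pandas as pd

# A few rows shaped like the Kaggle insurance CSV
# (the real file has more columns and ~1,300 rows).
df = pd.DataFrame({
    "age": [19, 33, 27, 41],
    "region": ["southwest", "southeast", "southwest", "northwest"],
    "charges": [16884.92, 4449.46, 21984.47, 6406.41],
})

# An in-memory SQLite database keeps this sketch runnable anywhere;
# swap in a SQLAlchemy engine pointed at your MySQL server to match
# the article's setup.
conn = sqlite3.connect(":memory:")
df.to_sql("Insurance", conn, index=False)

count = conn.execute("SELECT COUNT(*) FROM Insurance").fetchone()[0]
print(count)  # 4
```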


Click on the Save icon and give a name to your dashboard (see Figure 11). Click Save.

Figure 11: Name and save the dashboard.

Creating Panels
With the dashboard created, you're now ready to add a panel to your dashboard.

The panel is the basic visualization building block in Grafana. A panel can display a bar chart, pie chart, time series, map, histogram, and more.

Click the Add Panel icon and then the Add a new panel button to add a panel to your dashboard (see Figure 12).

Figure 12: Add a new panel to the dashboard.

You should see the default panel, as shown in Figure 13.

Figure 13: The default panel added to the dashboard

Let's try to create a pie chart. Enter the details shown in Figure 14.

In the above, you set:

• Visualization type to Pie chart
• Data source to MySQL
• Manually set the SQL query to:

SELECT
  now() as time,
  count(region) AS value,
  region as metric


FROM Insurance
GROUP BY region

• Format to Time Series

Figure 14: Display a pie chart in the current panel.

In addition to the pie chart, you can also display a bar chart (see Figure 15).

Figure 15: Displaying a bar chart in the current panel

You can also display the data as Stat (statistics; see Figure 16).

If you want to display a histogram showing the distribution of the various age ranges, you can set the panel as shown in Figure 17.

When you're done with the panel, clicking on the Apply button at the top of the page returns you to the dashboard. Your dashboard now has one panel (see Figure 18).

From this point, you can add additional panels to your dashboard.

Time series visualization is the default and primary way to visualize data in Grafana. Your SQL query's result should have a field named Time.
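As a quick sanity check of the panel query above, here's a sketch that reproduces it with pandas. SQLite stands in for MySQL again (MySQL's now() is spelled datetime('now') in SQLite), and the region rows are made up:

```python
import sqlite3

import pandas as pd

# Rows standing in for the Insurance table's region column.
rows = [("southwest",), ("southeast",), ("southwest",), ("northwest",)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Insurance (region TEXT)")
conn.executemany("INSERT INTO Insurance VALUES (?)", rows)

# Same shape as the panel query: one time/value/metric row per region.
query = """
    SELECT datetime('now') AS time,
           COUNT(region) AS value,
           region AS metric
    FROM Insurance
    GROUP BY region
"""
df = pd.read_sql_query(query, conn)
print(dict(zip(df["metric"], df["value"])))
```

Each metric/value pair here is one slice of the pie chart, which is why the query groups by region and counts rows per group.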


Figure 16: Displaying the result of the query as statistics

Figure 17: Displaying a histogram showing the age distribution

Exporting and Importing Dashboards
Once you've created your dashboard, you can export it so that you can back it up or load it into Grafana on another computer. To export a dashboard, click the Share icon and then click the Export tab (see Figure 19). Then, click the Save to file button.

A JSON file will now be downloaded to your computer. The JSON file contains all the details of your dashboard and can be sent to another computer. To import the exported dashboard, select + > Import (see Figure 20) and load the JSON file that you saved earlier. The dashboard will now be imported into Grafana.

Creating Dynamic Auto-Updated Time Series Charts
So far in this article, you've seen how to load data from a MySQL database using the MySQL Data Source. However, in real life, a lot of times the data is from Web APIs, and so it makes sense to be able to use Web APIs in Grafana. Although this looks like a straight-forward affair, it's more involved than you would imagine. For this section, you'll:

• Build a REST API back-end containing stock prices so that it can be used by Grafana's SimpleJson data source.
• Build the front-end using Grafana. The chart that you'll build shows the stock price of either AAPL or GOOG (fetched from the back-end REST API) and will be automatically updated every five seconds.

Creating the Sample Data
First, let's start off with the server side. For this project, you'll build a REST API that allows a client to retrieve the stock prices of either AAPL or GOOG based on a specified range of dates. For the stock prices, I'm going to simulate the prices by randomly generating them and then saving them in CSV files.
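Circling back to the exported dashboard for a moment: because the export is plain JSON, you can inspect it programmatically in a few lines. The document below is a hand-made miniature (a real export carries many more fields), but the panels list with a title per panel is genuine Grafana dashboard structure:

```python
import json

# A miniature stand-in for an exported dashboard JSON file.
exported = """
{
  "title": "Insurance",
  "panels": [
    {"title": "Regions", "type": "piechart"},
    {"title": "Ages", "type": "histogram"}
  ]
}
"""

dashboard = json.loads(exported)
titles = [panel["title"] for panel in dashboard["panels"]]
print(titles)  # ['Regions', 'Ages']
```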


Figure 18: The dashboard with one panel

Figure 19: Exporting a dashboard

In the real-world, your stock data will more likely come from databases that are updated dynamically.

For this article, I'll pre-generate the prices for four days, from one day before the current day, to three days after the current day. First, generate the simulated stock prices for GOOG:

from datetime import timedelta, timezone
import datetime
import numpy as np
import pandas as pd

date_today = datetime.datetime.now()

days = pd.date_range(date_today - timedelta(1),
    date_today + timedelta(3), freq = '5s',
    tz = timezone.utc)

df = pd.DataFrame(
    {
        'value':np.random.uniform(
            2500,3200,len(days))
    }, index = days)

display(df)
df.to_csv('stocks_goog.csv')

The simulated stock price is saved as stocks_goog.csv. Next, generate the prices for AAPL:

from datetime import timedelta, timezone
import datetime
import numpy as np
import pandas as pd

date_today = datetime.datetime.now()

days = pd.date_range(date_today - timedelta(1),
    date_today + timedelta(3), freq = '5s',
    tz = timezone.utc)

df = pd.DataFrame(
    {
        'value':np.random.uniform(
            150,190,len(days))
    }, index = days)

display(df)
df.to_csv('stocks_aapl.csv')

The simulated stock price is saved as stocks_aapl.csv. Here is a sample output for the stocks_aapl.csv file:

,value
2022-03-16 11:19:56.209523+00:00,184.55338767944096
2022-03-16 11:20:01.209523+00:00,168.76885410294773
2022-03-16 11:20:06.209523+00:00,188.02816186918278
2022-03-16 11:20:11.209523+00:00,164.63482117646518
2022-03-16 11:20:16.209523+00:00,161.33806737466773
2022-03-16 11:20:21.209523+00:00,169.10779687119663
2022-03-16 11:20:26.209523+00:00,169.90405158220707
2022-03-16 11:20:31.209523+00:00,189.30501099950166
...

Figure 20: Importing a dashboard

Creating the REST API
Let's now focus our attention on creating the REST API, which is the more challenging aspect of this project. As mentioned earlier, you can use the SimpleJson data source on Grafana to connect to a REST API. However, it requires the REST API to implement specific URLs (see https://grafana.com/grafana/plugins/grafana-simple-json-datasource/ for more details). This means that your REST API must be written specially for the SimpleJson data source.

To make my life simpler, I decided to use the Grafana pandas datasource module (https://github.com/panodata/grafana-pandas-datasource) to create my REST API. This module runs an HTTP API based on Flask, and it returns Pandas dataframes to Grafana, which is what the SimpleJson data source can work with. The module also provides samples that show how to implement your own REST API. I've adapted one of the provided samples (sinewave-midnights) and modified it for my purposes.

Create a new text file and name it demo.py. Populate it with the statements shown in Listing 1.

Listing 1. The REST API for stock prices

import numpy as np
import pandas as pd
from grafana_pandas_datasource import create_app
from grafana_pandas_datasource.registry import data_generators as dg
from grafana_pandas_datasource.service import pandas_component

from datetime import datetime, timedelta
from pytz import timezone

def define_and_register_data():
    def get_stock(stock_symbol, ts_range):
        if stock_symbol == 'AAPL':
            df = pd.read_csv('stocks_aapl.csv',
                parse_dates=True, index_col=0)
        if stock_symbol == 'GOOG':
            df = pd.read_csv('stocks_goog.csv',
                parse_dates=True, index_col=0)

        # return the rows that fall within the specified dates
        return df[(df.index > ts_range['$gt']) &
                  (df.index < ts_range['$lte'])]

    # Register data generators
    dg.add_metric_reader("symbol", get_stock)

def main():
    # Define and register data generators.
    define_and_register_data()

    # Create Flask application.
    app = create_app()

    # Register pandas component.
    app.register_blueprint(pandas_component, url_prefix="/")

    # Invoke Flask application.
    app.run(host="127.0.0.1", port=3003, debug=True)

if __name__ == "__main__":
    main()

Figure 21: Understanding how the REST API works with Grafana

Observe the following:

• The define_and_register_data() function contains another function called get_stock().
• The add_metric_reader() function links the query sent by Grafana with the function that handles the query. In this case, it links the "symbol" query with the get_stock() function.
• The query sent by Grafana includes a value. For example, you can configure Grafana to send a query "symbol:AAPL". The value of "AAPL" will be passed to the first parameter of the get_stock() function (see Figure 21).
• The second parameter of the get_stock() function, ts_range, gets its value from Grafana. You'll see this later.
• In the get_stock() function, you load the required CSV file and then perform a filter to only return the



rows that match the time range (ts_range) passed in by Grafana. In real life, you should load the data from a database server.
• The REST API listens at port 3003 on the local computer.
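The time-range filter in get_stock() is just an index comparison. Stripped of pandas, the same idea looks like this (a simplified, illustrative stand-in for the expression in Listing 1):

```python
from datetime import datetime, timedelta

# Simulated (timestamp, price) rows, five seconds apart,
# standing in for the DataFrame index in Listing 1.
start = datetime(2022, 3, 17, 5, 48)
rows = [(start + timedelta(seconds=5 * i), 100.0 + i)
        for i in range(100)]

# Grafana sends the range as a dict with '$gt' and '$lte' keys.
ts_range = {'$gt': start + timedelta(minutes=1),
            '$lte': start + timedelta(minutes=2)}

# Keep only rows strictly inside the requested window,
# mirroring df[(df.index > $gt) & (df.index < $lte)].
selected = [(t, v) for (t, v) in rows
            if ts_range['$gt'] < t < ts_range['$lte']]

print(len(selected))
```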

Before you can run the REST API, you need to install
the grafana-pandas-datasource module:

$ pip install grafana-pandas-datasource

You now can run the REST API:

$ python demo.py

You should see the following:

* Serving Flask app "grafana_pandas_datasource"
  (lazy loading)
* Environment: production
  WARNING: This is a development server. Do not use
  it in a production deployment. Use a production
  WSGI server instead.
* Debug mode: on
2022-03-17 13:37:38,454.454 [werkzeug ]
INFO : * Running on http://127.0.0.1:3003/
(Press CTRL+C to quit)
2022-03-17 13:37:38,455.455 [werkzeug ]
INFO : * Restarting with stat
2022-03-17 13:37:38,895.895 [werkzeug ]
WARNING: * Debugger is active!
2022-03-17 13:37:38,902.902 [werkzeug ]
INFO : * Debugger PIN: 301-215-354

Figure 22: Add the SimpleJson data source to the project.

Testing the REST API
You can now test the REST API by issuing the following command using the curl utility:

$ curl -H "Content-Type: application/json" -X POST
    http://127.0.0.1:3003/query -d '{"targets":
    [{"target": "symbol:GOOG"}], "range": {"from":
    "2022-03-17 03:00:49.110000+00:00",
    "to": "2022-03-17 03:05:49.110000+00:00"}}'

The result looks like this:

[
    {
        "datapoints": [
            [
                3169.441653300435,
                1647486052594
            ],
            [
                2748.501265212758,
                1647486057594
            ],
            [
                3195.3559754568632,
                1647486062594
            ],
            ...
            [
                2744.0582066482057,
                1647486342594
            ],
            [
                3098.521949302881,
                1647486347594
            ]
        ],
        "target": "value"
    }
]

Figure 23: Configure the SimpleJson data source.
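If you'd rather exercise the endpoint from Python than from curl, the same payload can be built and posted with the standard library. This is a sketch: the actual request line is commented out so the snippet doesn't require the Flask server to be running:

```python
import json
from urllib import request

# The same query payload the curl command sends to /query.
payload = {
    "targets": [{"target": "symbol:GOOG"}],
    "range": {"from": "2022-03-17 03:00:49.110000+00:00",
              "to": "2022-03-17 03:05:49.110000+00:00"},
}
body = json.dumps(payload).encode("utf-8")

req = request.Request("http://127.0.0.1:3003/query",
                      data=body,
                      headers={"Content-Type": "application/json"})

# Uncomment when demo.py is running:
# with request.urlopen(req) as resp:
#     print(resp.read().decode())

print(req.get_method())
```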



Now that the REST API is up and running, the next thing is to work on the Grafana side.

Adding the SimpleJson Data Source
The next step is to add and configure the SimpleJson data source in Grafana. Using the SimpleJson data source, you can connect to REST APIs and download the data in JSON format.

In Grafana, first add a new Data Source (Configuration > Data sources). Click on the Add data source button. Search for SimpleJson and double-click on it (see Figure 22).

For the URL, enter http://localhost:3003 and click Save & test (see Figure 23).

Creating the Dashboard
Create a new dashboard in Grafana. Then add a new Panel by clicking Add a new panel (see Figure 24).

Using the default Time series visualization, configure the Data source as shown in Figure 25. Also, enter symbol:AAPL for the timeserie query. You should now see the chart plotted.

Earlier I mentioned that Grafana will send a query to the REST API. This is the query: symbol:AAPL.
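Internally, a target string such as symbol:AAPL is split into a metric name and an argument, which is how the grafana-pandas-datasource module knows to route "AAPL" to the get_stock() function. A rough, illustrative version of that dispatch logic (a sketch, not the module's actual code):

```python
def parse_target(target):
    """Split a SimpleJson target such as 'symbol:AAPL' into its parts."""
    metric, _, argument = target.partition(":")
    return metric, argument

# The metric name selects the registered reader; the argument
# becomes the first parameter passed to it.
print(parse_target("symbol:AAPL"))
```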

You can control how often you want Grafana to fetch the
data for you by selecting the date range (see Figure 26).

For example, if the current date and time is 17th March 2022, 5:53am (UTC time), Grafana will send the following time range to the back-end REST API if you select the Last 5 minutes option:

{
'$gt': datetime.datetime(2022, 3, 17, 5, 48,
18, 555000,
tzinfo=<UTC>),
'$lte': datetime.datetime(2022, 3, 17, 5, 53,
18, 555000,
tzinfo=<UTC>)
}
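In other words, the range is simply "now minus the selected window" up to "now." A small sketch of how a back-end could compute the same five-minute window (illustrative, not part of the article's code):

```python
from datetime import datetime, timedelta, timezone

def last_n_minutes(n, now=None):
    """Return the '$gt'/'$lte' pair for a 'Last n minutes' range."""
    now = now or datetime.now(timezone.utc)
    return {'$gt': now - timedelta(minutes=n), '$lte': now}

# Using the article's example timestamp (17th March 2022, 5:53:18 UTC):
rng = last_n_minutes(5, now=datetime(2022, 3, 17, 5, 53, 18,
                                     tzinfo=timezone.utc))
print(rng['$lte'] - rng['$gt'])
```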

The chart will be updated with the data for the last five minutes (see Figure 27).

Figure 24: Add a new panel to the dashboard.

Figure 25: Configure the SimpleJson data source in the Panel.



Click the Save button at the top right corner to name and
save the dashboard (see Figure 28).

You’ll now be returned to the dashboard.

Configuring a Variable
You can configure a variable in Grafana for your dashboard to
allow users to display the chart for different stock symbols.

Click on the Settings button for your current dashboard (see Figure 29).

Click the Variables section on the left and then click Add
variable (see Figure 30).

Enter the following information for the variable and then click Update (see Figure 31).

Figure 26: Select how often you want Grafana to fetch your data for you.

Figure 27: Display the data fetched from the REST API.

Back in the dashboard, select Edit for the Panel (see Figure 32).

Set the title of the Panel as Company: $stock and update the query to symbol:$stock (see Figure 33). Prefixing the variable name with the $ character allows you to retrieve the value of the variable and use it in your query and panel.

Click Apply. In the textbox next to the stock variable, enter AAPL.
You should now see the chart for AAPL (see Figure 34).

In the textbox next to the stock variable, enter GOOG. You should now see the chart for GOOG (see Figure 35).

Auto-Updating the Chart
To automatically update the chart, select the drop-down list next to the Refresh button and select how often you want to refresh the chart (see Figure 36).

Figure 28: Save the dashboard.



If you select 5s, the chart will now be updated every five
seconds.

Displaying Real-Time Sensor Data Using MQTT
In Grafana 8.0, as part of the Grafana Live feature, it’s now
possible to perform real-time data updates using a new
streaming API. This means that it’s now possible for you to
create charts that update in real-time and on-demand.

To make use of this feature, you can use the MQTT data source (https://github.com/svet-b/grafana-mqtt-datasource), a plug-in that allows Grafana users to visualize MQTT data in real-time. In this section, you'll learn how to use the MQTT data source to display sensor data in real-time.

Using MQTT is ideal for projects such as those involving the Raspberry Pi (see Figure 37).

Figure 29: Configure the dashboard

Installing the MQTT Data Source into Grafana
Although the MQTT data source is built for Grafana, it doesn't come bundled with Grafana; you need to build and install it yourself. This is where the complexity comes in. In this section, I'll show you how to install the MQTT Data Source on Windows. You'll need a few tools and languages:

• Node.js
• yarn
• Go
• Mage

Don’t worry if you’re not familiar with these tools and language.
I’ll show you how to install them in the following sections.

Install Node.js
Download and install Node.js from https://nodejs.org/en/download/. Once Node.js is installed on your system, type the following command in Command Prompt to install yarn:

npm install -g yarn

Figure 30: Add a variable to the current dashboard.

Figure 31: Configure the variable to be added to the dashboard.

Figure 32: Editing the panel



Install Go
Download and install Go from https://go.dev/dl/. Follow
the instructions in the installer.

Download Mage
Go to the source of Mage at https://github.com/magefile/mage. Click on the Code button and select Download ZIP (see Figure 38).

Once downloaded, extract the content of the zip file onto the Desktop. Next, find out the location of your GOPATH by typing this command in Command Prompt:

go env

You can locate the path for GOPATH from the output:

...
set GOOS=windows
set GOPATH=C:\Users\Wei-Meng Lee\go Figure 33: Using the newly added variable in the panel
set GOPRIVATE=
...

Type the following command to create a path called Go (if this path does not already exist on your computer; remember to use your own path):

cd c:\users\Wei-Meng Lee\
mkdir Go

In the Command Prompt, change the directory to the mage-master folder:

cd C:\Users\Wei-Meng Lee\Desktop\mage-master

Type the following command:

go run bootstrap.go

You should see something like this:

Running target: Install
exec: go "env" "GOBIN"
exec: go "env" "GOPATH"
exec: git "rev-parse" "--short" "HEAD"
exec: git "describe" "--tags"
exec: go "build" "-o" "C:\\Users\\Wei-Meng
Lee\\go\\bin\\mage.exe" "-ldflags=-X
\"github.com/magefile/mage/mage.timestamp=2022-03-
18T12:44:11+08:00\" -X
\"github.com/magefile/mage/mage.commitHash=\" -X
\"github.com/magefile/mage/mage.gitTag=dev\""
"github.com/magefile/mage"

Download the Source for the MQTT Data Source
The next step is to download the source code of the MQTT data source. Go to https://github.com/grafana/mqtt-datasource and download the ZIP file (see Figure 39).

Once the file is downloaded, extract the content of the zip file to the Desktop. Edit the file named package.json in the mqtt-datasource-main folder using a code editor. Replace "rm -rf dist && ..." with "del /F /Q dist && ..." (this replacement ensures that the build command works on Windows):



Figure 36: Select how often you want the chart to refresh.

Figure 37: A Raspberry Pi

{
  "name": "grafana-mqtt-datasource",
  "version": "0.0.1-dev",
  "description": "MQTT Datasource Plugin",
  "scripts": {
    "build": "del /F /Q dist && grafana-toolkit plugin:build && mage build:backend",
  ...

In the Command Prompt, cd to the mqtt-datasource-main folder:

cd C:\Users\Wei-Meng Lee\Desktop\mqtt-datasource-main

And type in the following commands:

yarn build
yarn install

Configuring the Plugin
Move the mqtt-datasource-main folder into the C:\Program Files\GrafanaLabs\grafana\data\plugins folder.

Next, load the defaults.ini file in the C:\Program Files\GrafanaLabs\grafana\conf folder and add in the allow_loading_unsigned_plugins statement shown below:

Figure 38: Download the source code for Mage.

[plugins]
enable_alpha = false
app_tls_skip_verify_insecure = false
# Enter a comma-separated list of plugin identifiers
# to identify plugins to load even if they are
# unsigned. Plugins with modified signatures are never
# loaded.
allow_loading_unsigned_plugins = grafana-mqtt-datasource

# Enable or disable installing / uninstalling /
# updating plugins directly from within Grafana.
plugin_admin_enabled = true

The above addition indicates to Grafana to allow unsigned plug-ins (which, in this case, is the MQTT Data Source). Restart Grafana in Windows Services. If you have performed the above steps in building the MQTT Data Source for Grafana, you should now be ready to use it.
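Before restarting Grafana, you can sanity-check the edited section with Python's configparser (here the relevant fragment is inlined for illustration rather than read from the actual defaults.ini):

```python
import configparser

# The fragment added to defaults.ini, inlined here for illustration.
fragment = """
[plugins]
enable_alpha = false
app_tls_skip_verify_insecure = false
allow_loading_unsigned_plugins = grafana-mqtt-datasource
plugin_admin_enabled = true
"""

cfg = configparser.ConfigParser()
cfg.read_string(fragment)

# Confirm the unsigned-plugin allowance is in place.
print(cfg["plugins"]["allow_loading_unsigned_plugins"])
```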

Figure 39: Download the source code of the MQTT Data Source.

Publishing Data to a MQTT Broker
Before you start to build a Panel to display data using the MQTT Data Source, you need to write data to a MQTT broker



so that you can subscribe to it. For this, I'll write a Python program to simulate some sensor data.

Create a text file and name it publish.py. Populate it with the following statements:

# pip install paho-mqtt
import paho.mqtt.client as mqtt
import numpy as np
import time

MQTTBROKER = 'test.mosquitto.org'
PORT = 1883
TOPIC = "home/temp/room1/storeroom"

mqttc = mqtt.Client("python_pub")
mqttc.connect(MQTTBROKER, PORT)

while True:
    MESSAGE = str(np.random.uniform(20,30))
    mqttc.publish(TOPIC, MESSAGE)
    print("Published to " + MQTTBROKER + ': ' +
          TOPIC + ':' + MESSAGE)
    time.sleep(3)

Figure 40: Add the MQTT data source to Grafana.

The above Python program sends some data to the public MQTT broker every three seconds. To run the publish.py file, type the following commands in Command Prompt:

pip install paho-mqtt
python publish.py
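As an aside, if you'd rather not pull in numpy just to fake a temperature, the standard library's random module does the same job. This is an equivalent sketch of the message generation in publish.py:

```python
import random

random.seed(42)  # seeded only to make this illustration deterministic

def fake_reading(low=20.0, high=30.0):
    """Simulate one temperature reading, like np.random.uniform(20, 30)."""
    return str(random.uniform(low, high))

msg = fake_reading()
# The published payload is just the stringified number.
print(20.0 <= float(msg) <= 30.0)
```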

Subscribe Using the MQTT Data Source
In Grafana, add a new data source by searching for mqtt (see Figure 40).

Configure the MQTT data source as follows (see also Figure 41):

• Host: test.mosquitto.org
• Port: 1883

You can leave the fields under the Authentication section empty. Click Save & test once you're done.

Figure 41: Configure the MQTT data source.

Figure 42: Add a new panel to the dashboard.



Figure 43: Configuring the panel

SPONSORED SIDEBAR:
Need FREE Project Advice? CODE Can Help!
No-strings free advice on new or existing software development projects. CODE Consulting experts have experience in cloud, Web, desktop, mobile, microservices, containers, and DevOps projects. Schedule your free hour of CODE call with our expert consultants today. For more information visit www.codemag.com/consulting or email us at info@codemag.com.

Figure 44: Display the data from MQTT as a line chart.

Next, create a new Dashboard and add a new Panel (see Figure 42).

Configure the panel with the following (see also Figure 43):

• Visualization: Time Series
• Data source: MQTT
• Topic: home/temp/room1/storeroom

Click Apply when done.

You should now see the chart plotted and updated every three seconds (see Figure 44). If you can see the chart, this means that you're able to receive the data through the MQTT data source.

Summary
In this article, you've seen how to use Grafana to build dashboards with a minimal amount of code. Instead of code, all you need is some basic SQL skills (of course, there will be cases where you need to write complex SQL queries to extract your data). In addition, I've walked you through two projects where you learned how to dynamically update your chart from a REST API, as well as fetch data from an MQTT data source. Have fun!

Wei-Meng Lee





ONLINE QUICK ID 2207071

The Excellent Schemer


Accelerate™ for Microsoft 365 is a new, commercially available Microsoft Office add-in that deeply integrates the Visual Scheme for Applications™ (VSA™) programming language into the popular back-office automation suite, for versions 2016 and later. VSA intends to serve as the "third musketeer," alongside Visual Basic for Applications (VBA) and the popular formula expression

language of Excel, but with a twist: Under the hood of this uniquely powerful language is the full reach and power of the .NET framework.

VSA is implemented on top of .NET and woven directly into the fabric of the storied Office suite via the Accelerate for Microsoft 365 add-in. Initial focus is on Excel, but eventually all the applications that are part of the suite will benefit from its core value proposition. The add-in supports in-application scripting in VSA via a proper REPL (see Figure 1), available from within the host Office application (and accessible at the command line as well for external processing).

In the case of Excel, an additional pretty-printing, syntax-highlighting Lambda Editor (see Figure 2) makes writing user-defined functions (UDFs) in Scheme both easy and convenient.

In this latter capacity within Excel, Visual Scheme for Applications complements Microsoft's recent addition of the LET() and LAMBDA() functions to their formula expression language by providing a flavor of the same functional programming approach to creating UDFs, but with an external enterprise integration focus. Unlike the formula expression language in which LET() and LAMBDA() are available, VSA code can interact with the outside world from both directions: the inside out and the outside in. A standalone scripting engine at the command line for general-purpose programming accompanies the add-in and comes with a plethora of useful Scheme and .NET wrapper libraries for this very purpose.

In this article, I'll walk you through the rationale for making Scheme on .NET a first-class extension/automation language of the Office suite by way of a hands-on primer. First, I'll cover some basic facts about Scheme as I ease into reviewing VSA as a first-class language for user-defined function (UDF) development in Excel. Then, I'll explore some of the more powerful techniques around wrapping .NET libraries for use in VSA-powered Office solutions. Finally, I'll take a step back and cover syntax extensions, perhaps the most mystifying feature of the language, and how to interact with them in the built-in and command-line REPL.

Bob Calco
Bob.Calco@apexdatasolutions.com
@BobCalco

Bob is a seasoned enterprise architect and a proven software innovator with over 20 years of experience designing, building, and leading teams to deliver large-scale, enterprise solutions in the healthcare, financial services, and retail sectors. For the past five years, Bob has focused on data sharing and interoperability both within and between large, distributed enterprise systems. Bob was the primary visionary and architect for the VistA.js Platform within the VA, a Class 1 enterprise microservices framework that enabled data federation and reconciliation across the VA's 154 medical centers. Currently, he leads the Apex Team to address current industry challenges associated with Provider Data Management/Provider Directories.

Bob is a certified expert in multiple programming languages, having worked productively in over a dozen programming environments and platforms as diverse as C++, Delphi, Ada, Java, .NET (C#, VB), JavaScript, LISP, Clojure, Scala, Smalltalk, Ruby, and others.

The Downlow on the Download
The download includes the two spreadsheets discussed in the article and the blockchain.sls library from Listing 5 (a single file you can add to your Accelerate Working Directory). The examples require Accelerate for Microsoft 365 to run.

Figure 1: The REPL is where you test out ideas before you commit them to code.

Figure 2: The Lambda Editor makes writing and using Scheme as a language for UDF development a pleasant breeze.

Why Scheme?
At first glance, Scheme might seem an odd choice of a language to tag team with VBA and Excel's formula expression language. On closer inspection, and especially considering

Microsoft's new, explicitly "functional programming" direction with Excel's formula expression language, Scheme seems the inevitable choice.

Scheme is a seasoned dialect in the venerable Lisp family of computer programming languages. Quoting Guy Steele, one of Scheme's inventors, "If you give someone FORTRAN, he has FORTRAN. If you give someone Lisp, he has any language he pleases" (Friedman, D. The Reasoned Schemer, Second Edition, MIT Press, 1998). The choice of providing a Lisp language is a conscious decision to put the programmer in control of the language's features and functions for a given domain. The bottom-up style of programming it encourages results in every Lisp application becoming a domain-specific language (DSL). This is because of a feature that all Lisps have, called macros (although Scheme's flavor of this feature is unique to it, and referred to as "syntax extension" instead). I'll show an example of this below.

Many top-flight universities teach Scheme to first-year computer science majors with no previous background in programming, because anyone can grasp its basic syntax in just a few hours. It's very easy to learn (and to teach), with an approachable and consistent core syntax based on Lisp S-expressions that's both very succinct and surprisingly malleable. Mastering Scheme is a bit like mastering chess, with the exception that even a complete beginner can do amazing things in Scheme right away.

The Scheme that Accelerate for Microsoft 365 integrates into Office goes by the name "Visual Scheme for Applications," or VSA, making its mission clear: to be the perfect companion to VBA, by adding metaprogramming power and .NET reach to the Office/VBA solution developer's arsenal of tools.

It's worth noting briefly that Scheme is used elsewhere in the real world. For example, the Scheme programming language is the GNU project's official application extension language, where it goes by the name of Guile (see http://gnu.org/software/guile). Another implementation of Scheme, called MIT Scheme, is the programming language of choice for teaching, used in several advanced textbooks on diverse engineering topics, including classical mechanics (Structure and Interpretation of Classical Mechanics, by G.J. Sussman and J. Wisdom), differential geometry (Functional Differential Geometry, by G.J. Sussman and J. Wisdom), and software engineering (Software Design for Flexibility—How to Avoid Programming Yourself into a Corner, by C. Hanson, and


G.J. Sussman). In other words, Scheme is more at home in application extensions, math-intensive, and data-intensive problem domains than you may realize. Sounds perfect for Office—especially Excel!

Yet another popular embodiment of Scheme goes by the name of Racket (see https://www.racket-lang.org). Racket aspires to be an industrial-strength Scheme whose main mission is "language-oriented programming," proving that Scheme is indeed a uniquely powerful language for creating DSLs.

All of these facts make Scheme, at the very least, an intriguing choice for Office solutions developers looking for ways to modernize their clients' back-office systems in an interconnected, multi-cloud world.

So, let's see it in action, shall we?

Mary Had a Little (lambda () …)
One thing that surprises developers who haven't yet had a chance to explore a proper Lisp is how similar the new LET/LAMBDA functions in Excel look, or at least feel, to Lisp/Scheme. Here is an example of an Excel LAMBDA/LET function:

=LAMBDA(X,Y,LET(XS,X*X,YS,Y*Y,SQRT(XS+YS)))

Now here's the same thing in Scheme:

(lambda (x y)
  (let ((xs (* x x))
        (ys (* y y)))
    (sqrt (+ xs ys))))

To make it a little bit clearer how a Schemer would almost automatically see the original LAMBDA/LET function, by formatting it in a more semantically revealing way, consider:

=LAMBDA(X, Y,
  LET(XS, X*X,
      YS, Y*Y,
      SQRT(XS+YS)))

That looks a lot like Scheme to me! The only real difference is the infix notation and the semantics of the LET expression, which one simply must know to understand what's going on. The evaluation order is not particularly self-evident unless, of course, you happen to know how the let binding form in every Lisp works.

The beauty of the Scheme code is that syntactically there's a lot less going on, and no foreknowledge is required to understand it. The Scheme prefix notation makes commas unnecessary and there's zero ambiguity about what each set of parentheses does in context. The values are evaluated according to the natural structure of the nested S-expressions. A Schemer understands that the parentheses, far from being an annoyance, serve this disambiguation function and make so much syntax seem superfluous when it's related to order precedence and other grammar rules that most languages need to define and enforce somehow.

In the LAMBDA function, by contrast, it's necessary to simply know that the LET function MUST have an odd number of parameters, the last of which must evaluate to a value, and the arguments you provide must alternate between a name and a value binding.

The Many Faces and Phases of Scheme
Scheme as a formal language has seen several major revisions over its lifetime. Even today it continues to evolve, at a rather slower pace relative to the expectations of developers in more modern languages. However, the stability of the tiny core of the language is Scheme's greatest asset from both a learnability and a portability point of view. While Scheme's portability has proven elusive due to reasons relating primarily to its inherent malleability and primary use case as an application extension language, it's always been easy to learn and extend.

The current standard for Scheme is referred to as R6RS, it being the sixth major revision of the language. See http://www.r6rs.org/ for the full rationale and specification. R7RS has been in the works for many years and is presently split up between "small" and "large" specs as the language committee grapples with resolving the best of previous versions with what hopefully will be the awesomeness of the new specification. In the meantime, actual Scheme implementations tend to support multiple specifications, for mainly practical reasons given the glacial pace of change. IronScheme, on which VSA is built, only supports the R6RS specification.

Another, perhaps more real-time, driver of the evolution of Scheme is what are known as "Scheme Requests for Implementation" or SRFIs (see https://srfi.schemers.org/). VSA supports many of these requests, which are numbered. You can see them in the lib\srfi\ subdirectory of your installation folder. If you're feeling particularly industrious, you can get involved, try implementing some SRFIs that aren't yet included, or perhaps author your very own SRFI!

Figure 3: Building sentences from words in 2D regions requires carefully choosing the orientation according to which the words will be flattened into a single array.



Now let’s take a quick look at some of the more meat-and- list->vector Scheme function on “words,” which does just
potatoes uses of lambda expressions in Accelerate for Mi- what it says: It converts a list to a vector. Vectors in Scheme
crosoft 365. are the same as arrays in .NET.

Figure 3 is a screenshot of an example spreadsheet provid- Listing 2 (shown in the Lambda Editor in Figure 4) is the
ed in the download zip file (available from CODEMag.com) that captures several important idioms of programming in Scheme as well as a simple example of .NET interoperability. The challenge is to compose sentences from words listed in 2D regions. In the first region, the words are ordered by row, which is Excel's default evaluation order when feeding a region to a user-defined function. The second region orders the words by column.

There are four listings where lambda expressions are added to the sheet, corresponding to the numbers in Figure 3.

Listing 1 is the "join-words" lambda expression. In the sheet, this is created using the =define function provided by the add-in, which allows you to give a Scheme name to a lambda expression. The resulting expression in A7 tells you the Scheme name and the arity (or number of arguments) it takes. The hash is computed every time the code is altered to trigger Excel's calculation engine (depending on what options for auto-calculation you have set up) and can be ignored.

Note the clr-static-call form. This is one of several .NET interop forms that are built in, making it easy for you to call into the .NET runtime directly. In this case, you're calling the static Join method of the String class in the System namespace.

The tricky bit here is that, as you can see by the signature, Join expects an array, not a list. So, you call the built-in list->vector function to convert the word list first, as you can see in the listing.

Listing 2 is the "words->sentence" lambda expression, which uses join-words to produce all but the final period on the sentence. String-append is used to add that. This expression is variadic, which you can tell by the fact that the sole argument is the word "words" with no parentheses around it. This means that it can take zero or more arguments, and wraps whatever is (or isn't) sent up into a nice list that =apply can use.

Listing 3 (shown in the Lambda Editor in Figure 5) uses the =eval function provided by the add-in with a lambda expression that, in turn, calls the built-in =apply form. The issue here is that =apply can only accept a single list, of whatever

Listing 1: the "join-words" lambda expression

(lambda (words delimiter)
  (clr-static-call
   String (Join String Object[]) delimiter (list->vector words)))

Listing 2: the "words->sentence" lambda expression

(lambda words
  (string-append (join-words words " ") "."))

Listing 3: using (flatten lol) in an inline "apply" lambda

(lambda lst
  (apply words->sentence (flatten lst)))
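The variadic shape of words->sentence can be sketched outside Scheme as well. The following Python analog (hypothetical, for illustration only; it is not part of the add-in) shows the same pattern: a variadic function collects all of its arguments into one sequence, joins them, and appends the final period.

```python
def join_words(words, delimiter):
    # Analog of the join-words lambda: delegate joining to the host string type.
    return delimiter.join(str(w) for w in words)

def words_to_sentence(*words):
    # Variadic, like the Scheme (lambda words ...): zero or more arguments
    # are collected into one sequence, joined, and capped with a period.
    return join_words(words, " ") + "."

print(words_to_sentence("Mary", "had", "a", "little", "lambda"))
# Mary had a little lambda.
```

As in Scheme, calling it with no arguments at all is legal and yields just the period.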

Brother, Can You Paradigm?

One of the great strengths of any Lisp is the ability to write code according to almost any paradigm you can imagine—object oriented, purely functional, relational (aka, logic) programming, and many more. So, whipping up a LINQ is child's play compared to more advanced uses of Scheme's hygienic macro system. Under the hood, you'll find inspirational examples like the tinytalk.sls library (for OO programming in the Smalltalk tradition) and miniKanren for logic programming. All of these are now fully accessible within Office/Microsoft 365.

Figure 4: The “words->sentence” lambda expression uses join-words to construct a sentence from the “words” argument.

codemag.com The Excellent Schemer 67


Scheme is Only the Beginning!

Scheme is a great choice for novice programmers, and it's a natural fit in Office—particularly Excel. However, there's more to this story!

The Professional edition of Accelerate for Microsoft 365 will target more seasoned or professional developers and enterprise use cases, and it will also offer Clojure as an alternative, mainly focused on microservices and linked-data integration. Clojure is another great, somewhat more modern, Lisp that has a stealthy presence on .NET, and of which most .NET developers are unaware.

Figure 5: By default, Excel uses by-row semantics for ordering cells in 2D regions. Here we flatten the region into a 1D list using default semantics.

Figure 6: Flatten-by-column provides by-column flattening semantics. The ability to choose list ordering against 2D regions can come in handy, and not just for Mary and her little lambda!



type is required by the function to which it is applying the list. Excel is sending a 2D array or region, and so it's necessary to flatten the input to pass a list as expected by =apply.

Listing 4 (shown in the Lambda Editor in Figure 6) is nearly identical, except that flatten-by-column is used instead to obtain the correct word order in the sentence it creates.

Although somewhat contrived, this example shows some important idioms and how they map to conventional Excel behavior that's important to understand.

Listing 4: using (flatten-by-column lol) in an inline "apply" lambda

(lambda lst
  (apply words->sentence
         (flatten-by-column lst)))

Never Block the Chain

The add-in comes with a tutorial walking you through creating a blockchain-like feature in Excel. Figure 7 is a screenshot of the first part of that tutorial.

Rather than duplicate the steps here, I'll briefly cover the salient aspects of the code that's produced by the end of the tutorial (see Listing 5). The two things that deserve the most attention are:

• The direct interop with .NET via clr-* functions
• How .NET abstractions are wrapped in idiomatic Scheme code in a library, conformant with the R6RS Scheme specification

All of the code to make this example work is stored in the blockchain.sls library file (the full contents of which are provided in Listing 5). The only code that you'll find in the sample spreadsheet that's included in the download is a lambda expression to grab the genesis block hash in B2:

(lambda ()
  (genesis-block-hash))

The rest of the calls use the =eval function provided by the add-in, calling the "data-mine" function that's defined in Listing 5 with the appropriate arguments. These are the data of the current block and the hash of the previous block, respectively.

=eval("data-mine",A3,B2)

The tutorial does a good job of developing the code, so I'll just comment on the important highlights of Listing 5.

First, blockchain.sls is a canonical library definition per the R6RS specification, with which VSA and the underlying IronScheme implementation is 99% compliant. There are some niche aspects of continuations—a powerful Scheme feature that allows the language to be used to program constructs of other languages—that simply can't be implemented on the CLR because of low-level restrictions.

It exports and imports several symbols from the (ironscheme clr) namespace. The various clr-* forms encountered in Listing 5 are part of a very straightforward CLR interoperability API that you, as a VSA coder, will get to know quickly, because the code here exercises most of it and is relatively easy to understand.

The (clr-using …) form works just like the C# using or VB.NET Imports statements. It allows the library to reference symbols in the respective namespaces.
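The difference between the add-in's flatten and flatten-by-column is just the traversal order of the 2D region. A small Python sketch (hypothetical names, not the add-in's Scheme functions) makes the two orderings concrete:

```python
def flatten(region):
    # By-row flattening: Excel's default ordering for a 2D region.
    return [cell for row in region for cell in row]

def flatten_by_column(region):
    # By-column flattening: walk each column top to bottom.
    return [row[c] for c in range(len(region[0])) for row in region]

region = [["Mary", "had"],
          ["a", "little"],
          ["lambda", "."]]

print(flatten(region))            # row-major order
print(flatten_by_column(region))  # column-major order
```

Run against the same region, the first yields the words row by row and the second column by column, which is exactly the choice Listings 3 and 4 expose to the spreadsheet.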

Figure 7: Reaching into the System.Security.Cryptography namespace of .NET makes it easy to model the basic concept
and value proposition of a blockchain right inside Excel.



The genesis-block symbol is a Scheme parameter. This is similar to a dynamic variable in Lisp or a global variable in JavaScript, intended to allow a default value that is mutable, but the normal use of parameters is to give them locally scoped values that don't affect the global default value once the local value is out of scope.

Parameter values in Scheme are obtained by calling them like a function with no arguments. Setting them to new values is done by passing in a single argument, the new value. At creation time, Scheme parameters can have "guards" that will test any new value and disallow anything that should not be permitted as a new value.

By far the most interesting symbol in Listing 5 is the string->sha256 symbol:

(define string->sha256
  (lambda (str)
    (let* ((utf8 (clr-static-prop-get Encoding UTF8))
           (bytes (clr-call UTF8Encoding
                            (GetBytes String)
                            utf8 str))
           (hash-fn (clr-new SHA256Managed))
           (raw-result
            (clr-call SHA256Managed
                      (ComputeHash System.Byte[])
                      hash-fn bytes))
           (bits-str
            (clr-static-call BitConverter
                             ToString
                             raw-result))
           (clean-bits
            (clr-call String
                      Replace
                      bits-str "-" ""))
           (lower-bits (clr-call String
                                 ToLower
                                 clean-bits)))
      lower-bits)))
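The read/set/rebind behavior of a Scheme parameter can be mimicked in other languages. The following Python sketch is a rough analog (the names make_parameter and parameterize are assumptions for illustration, not the add-in's API): calling with no arguments reads the value, one argument sets it, the optional guard validates new values, and parameterize rebinds it for a limited scope.

```python
import contextlib

def make_parameter(default, guard=None):
    # Rough analog of a Scheme parameter object.
    box = [guard(default) if guard else default]

    def param(*args):
        if not args:
            return box[0]          # read: call with no arguments
        box[0] = guard(args[0]) if guard else args[0]  # set: one argument

    @contextlib.contextmanager
    def parameterize(value):
        # Locally scoped rebinding; the old value is restored on exit.
        old = box[0]
        param(value)
        try:
            yield
        finally:
            box[0] = old

    param.parameterize = parameterize
    return param

genesis_block = make_parameter("This is my genesis block.")
print(genesis_block())                       # read the global default
with genesis_block.parameterize("temporary block"):
    print(genesis_block())                   # locally rebound
print(genesis_block())                       # default restored
```

The guard hook corresponds to the "guards" mentioned above: it sees every candidate value and can reject (raise on) anything that should not be permitted.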

This function takes a string and calls into the .NET System.Security.Cryptography and System.Text namespaces to produce a SHA256 hash of the string, which is massaged to appear in the required format. It's worth taking time to work through this tutorial, because it's representative of what developers will do to surface .NET in a concise, easy-to-use way in the context of Excel formula expressions.

Listing 5: blockchain.sls

(library (blockchain)

  (export string->sha256
          data-mine
          genesis-block
          genesis-block-hash)

  (import (ironscheme)
          (ironscheme clr))

  (clr-using System.Text)
  (clr-using System.Security.Cryptography)

  (define genesis-block
    (make-parameter "This is my genesis block."))

  (define genesis-block-hash
    (lambda ()
      (string->sha256 (genesis-block))))

  (define string->sha256
    (lambda (str)
      (let* ((utf8 (clr-static-prop-get Encoding UTF8))
             (bytes (clr-call UTF8Encoding
                              (GetBytes String)
                              utf8 str))
             (hash-fn (clr-new SHA256Managed))
             (raw-result
              (clr-call SHA256Managed
                        (ComputeHash System.Byte[])
                        hash-fn bytes))
             (bits-str
              (clr-static-call BitConverter
                               ToString
                               raw-result))
             (clean-bits (clr-call String
                                   Replace
                                   bits-str "-" ""))
             (lower-bits (clr-call String
                                   ToLower
                                   clean-bits)))
        lower-bits)))

  (define data-mine
    (lambda (current-block-data previous-block-hash)
      (let* ((combined-data
              (string-append current-block-data
                             "+"
                             previous-block-hash))
             (result (string->sha256 combined-data)))
        result)))
)

Anything You Can Do, I Can Do Meta

Finally, let's take a look at a Scheme syntax extension.

While the team was wrapping .NET APIs, it became desirable to iterate over collections implementing the IEnumerable interface in a cleaner syntax. Listing 6 provides the full code. What follows is only a high-level description, as an entire series of articles could be written to properly explain how you might write code like this. It cannot be stressed enough just how powerful syntax extensions are in Scheme.

Most .NET developers will recall the sense of wonder and awe the first time the C# compiler team released LINQ, or "Language Integrated Query." Most will also remember the gradual evolution of LINQ into the powerful, general-purpose DSL that it is today. What I am about to say is not meant to diminish that sense of awe, but rather to rekindle it.

In Scheme, a mostly complete version of LINQ can be implemented in a weekend in about 900 lines of pure Scheme code. You heard that right. This may sound blasphemous to a .NET developer. However, any veteran Lisper or Schemer will know that this is a fair observation. We are used to such power at our fingertips—it's the whole point of the language.

We know this because that's the true story of the (ironscheme linq) library that's included as part of VSA. This is NOT a wrapper of .NET's LINQ features, which are specific to compilers that implement them, but rather a pure Scheme implementation of generic iterators against pure Scheme collections with most of the familiar syntax sugar .NET developers will recognize instantly.
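The hashing and chaining logic in Listing 5 is language-independent, so it can be cross-checked with a few lines of Python using the standard hashlib module (a sketch mirroring the listing's rules, not part of the add-in): hash = SHA-256 of the UTF-8 bytes, rendered as lowercase hex, and each block's input is its data joined to the previous hash with "+".

```python
import hashlib

def string_to_sha256(s):
    # Same transformation as string->sha256: UTF-8 bytes, SHA-256,
    # lowercase hex digest (hexdigest() is already lowercase and dash-free).
    return hashlib.sha256(s.encode("utf-8")).hexdigest()

def data_mine(current_block_data, previous_block_hash):
    # Same combination rule as data-mine: data + "+" + previous hash.
    return string_to_sha256(current_block_data + "+" + previous_block_hash)

genesis_hash = string_to_sha256("This is my genesis block.")
block1_hash = data_mine("first block of data", genesis_hash)
block2_hash = data_mine("second block of data", block1_hash)

# Tampering with any earlier block changes every later hash.
tampered = data_mine("second block of data",
                     data_mine("FIRST BLOCK OF DATA", genesis_hash))
print(block2_hash != tampered)
```

This is the value proposition the tutorial models: because every hash folds in the previous one, altering any earlier block invalidates the entire chain after it.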



Unfortunately, because it wasn't a wrapper of .NET, it left the team in a bind. There was no way to use IEnumerable collections with this pure Scheme iterator implementation. This isn't really a bind for a language like Scheme. Listing 6 allowed you to bridge the gap and use .NET collections with the pure-Scheme LINQ by defining pure Scheme iterators wrapping the IEnumerable interface, which, on the surface, looks completely compatible.

To be fair, Listing 6 is considerably more complicated than we thought it would be. It turns out, IEnumerable is not implemented in a uniform way for what appear to be mainly historical reasons, as the interface was treated—even by Microsoft—as more of a suggestion than a contract. In particular, the Reset() method often produces surprising behavior depending on how the implementor of a collection that supports IEnumerable decided to handle this. In order to deal with this, we had to make the syntax extension, named (define-enumerable …), able to take different input based on the caller's understanding of the object they're dealing with. Some classes that implement IEnumerable do what you need them to do, but sadly this isn't something you can take for granted. The VSA team found it necessary to provide default functionality obtaining a fresh IEnumerable instance from the parent object in which it was defined. It's not pretty, but it works, and you have the option to provide a specific type to cast it to when you know it does the right thing.

Listing 6: The built-in (define-enumerable …) syntax extension

(define-syntax define-enumerable
  (lambda (x)
    (syntax-case x (as)
      ((_ enum-name enum-fn)
       #'(define-enumerable enum-name enum-fn as IEnumerable))
      ((_ enum-name enum-fn as enum-type)
       #'(define enum-name
           (case-lambda
             [()
              (let ((iter (enum-fn)))
                (make-iterator
                 (lambda ()
                   (unless iter
                     (set! iter
                       (clr-call enum-type
                                 GetEnumerator
                                 (enum-fn))))
                   (clr-call IEnumerator MoveNext iter))
                 (lambda ()
                   (clr-prop-get IEnumerator Current iter))
                 (lambda ()
                   (set! iter
                     (clr-call enum-type
                               GetEnumerator
                               (enum-fn))))))]
             [(obj)
              (let ((iter (enum-fn obj)))
                (make-iterator
                 (lambda ()
                   (unless iter
                     (set! iter
                       (clr-call enum-type
                                 GetEnumerator
                                 (enum-fn obj))))
                   (clr-call IEnumerator MoveNext iter))
                 (lambda ()
                   (clr-prop-get IEnumerator Current iter))
                 (lambda ()
                   (set! iter
                     (clr-call enum-type
                               GetEnumerator
                               (enum-fn obj))))))]))))))

Example usages below come from the (visualscheme data rdf core) library.

The first example is the wrapper of the AllNodes property of a Graph class as defined in dotNetRDF. (RDF plays an important role in the Professional edition but is also available in the Standard edition.) Basically, what the (define-enumerable …) syntax needs is a function to obtain the fresh collection either from the (current-graph) by default, or from a specific graph instance passed in via the one-argument version of the method.

(define-enumerable graph/all-nodes
  (case-lambda
    [()
     (clr-prop-get Graph AllNodes (current-graph))]
    [(graph)
     (begin
       (if (or (null? graph)
               (not (graph? graph)))
           (error
            "graph must be a non-null Graph.")
           (if (not (eq? graph (current-graph)))
               (current-graph graph)))
       (clr-prop-get Graph AllNodes graph))]))

The following snippet is similar, except that to get it to work for the triples of a Graph, you have to provide an explicit type to which the collection needs to be cast so that the appropriate Reset() behavior is obtained—in this case, TreeIndexedTripleCollection.

(define-enumerable graph/triples
  (case-lambda
    [()
     (clr-prop-get Graph Triples (current-graph))]
    [(graph)
     (begin
       (if (or (null? graph)
               (not (graph? graph)))
           (error
            "graph must be a non-null Graph.")
           (if (not (eq? graph (current-graph)))
               (current-graph graph)))
       (clr-prop-get Graph Triples graph))])
  as TreeIndexedTripleCollection)

These allow the user of the wrapper code to write the following:

(foreach t in (graph/triples)
  (display t)
  (newline))
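The three closures that (define-enumerable …) expands into — advance, read, and rewind — form a small protocol that can be sketched in any language. The Python version below (hypothetical names; the real make-iterator is the Scheme form used in Listing 6) shows the key design choice: reset() fetches a fresh enumerator from the supplier function instead of trusting the underlying collection's Reset().

```python
def make_iterator(enum_fn):
    # Three closures over a private enumerator. move_next() lazily fetches
    # the enumerator and advances it; current() reads the last value;
    # reset() obtains a fresh enumerator from enum_fn, sidestepping
    # unreliable Reset() semantics on the wrapped collection.
    state = {"it": None, "cur": None}

    def move_next():
        if state["it"] is None:
            state["it"] = iter(enum_fn())
        try:
            state["cur"] = next(state["it"])
            return True
        except StopIteration:
            return False

    def current():
        return state["cur"]

    def reset():
        state["it"] = iter(enum_fn())

    return move_next, current, reset

move_next, current, reset = make_iterator(lambda: [1, 2, 3])
values = []
while move_next():
    values.append(current())
print(values)
```

After exhaustion, calling reset() and then move_next() starts over from the first element, because a brand-new enumerator is pulled from the supplier each time.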



Listing 7: Some LINQ examples in Scheme

;; conformance to C# tests
;; simply permutations of the grammar and matched to the output
;; of C#
(define (print-list lst)
  (foreach x in lst
    (printf "~a, " x))
  (printf "\n"))

(define selectdata '(1 5 3 4 2))
(define groupdata '(2 5 2 4 2))
(define nestdata '((2 5)(2 4)(3 5)(3 1)(1 1)))

(define a ( from x in selectdata
            select x))

(print-list a)

(define a2 (from x in (from y in selectdata
                        select (+ y 1))
            select (- x 1)))

(print-list a2)

(define b ( from x in selectdata
            where (even? x)
            select x))

(print-list b)

(define c ( from x in selectdata
            orderby x
            select x))

(print-list c)

(define d ( from x in selectdata
            orderby x descending
            select x))

(print-list d)

(define e ( from x in selectdata
            where (odd? x)
            orderby x
            select x))

(print-list e)

(define f ( from x in selectdata
            let y = (* x x)
            select y))

(print-list f)

(define f2 (from x in selectdata
            let y = (* x x)
            where (odd? y)
            orderby y descending
            select y))

(print-list f2)

(define g ( from x in selectdata
            select x into z
            select z))

(print-list g)

(define h ( from x in nestdata
            from y in x
            select y))

(print-list h)

(define i ( from x in nestdata
            where (= (car x) 2)
            from y in x
            select y))

(print-list i)

(define j ( from x in nestdata
            orderby (car x)
            from y in x
            select y))

(print-list j)

(define k ( from x in nestdata
            from y in x
            orderby y
            select y))

(print-list k)

(define l ( from x in nestdata
            select x into y
            from z in y
            select z))

(print-list l)

(define m ( from x in nestdata
            group (car x) by (cadr x)))

(print-list m)

(define n ( from x in nestdata
            group (cadr x) by (car x)))

(print-list n)

(define o ( from x in nestdata
            group (cadr x) by (car x) into z
            select z))

(print-list o)

(define p ( from x in nestdata
            group (cadr x) by (car x) into z
            orderby (key z) descending
            select z))

(print-list p)

(define q ( from x in selectdata
            join y in groupdata on x equals y into z
            from w in z
            select w))

(print-list q)

(define r ( from x in selectdata
            join y in groupdata on x equals y
            select (cons x y)))

(print-list r)

(define s ( from x in selectdata
            from y in groupdata
            where (and (= x 4) (= y 2))
            select (cons x y)))

(print-list s)

(define t ( from x in selectdata
            join y in groupdata on x equals y into z
            orderby x
            select z))

(print-list t)

(define u ( from x in selectdata
            join y in groupdata on x equals y into z
            orderby x
            select (cons x z)))

(print-list u)

(define v ( from x in selectdata
            from y in groupdata
            select y into z
            where (even? z)
            select z))

(print-list v)



Listing 7 provides a fuller flavor of LINQ in VSA, all of which will now also work with .NET types that implement IEnumerable if you use define-enumerable, as you did above, to expose them to IronScheme's iterator framework. These are borrowed from the documentation in (ironscheme linq) written by Llewelyn Pritchard, the main developer of IronScheme on which VSA is based.

I Said All That So I Can Say This…

This novel combination of a proven functional programming language designed from the ground up for metaprogramming with the vast ecosystem of the .NET Runtime in all its incarnations means that Office solution developers suddenly have access to two broad and deep solution spaces that previously were unavailable.

The sturdy yet shape-shifting Scheme language offers a way to harness and control the raw power of the Common Language Runtime. It will be exciting to see how industrious Office power users, VBA coders, Schemers, and .NET developers work together to leverage it and achieve greater returns for themselves and their clients.

— Bob Calco

CODE COMPILERS

Jul/Aug 2022, Volume 23, Issue 4

Group Publisher: Markus Egger
Associate Publisher: Rick Strahl
Editor-in-Chief: Rod Paddock
Managing Editor: Ellen Whitney
Contributing Editor: John V. Petersen
Content Editor: Melanie Spiller

(Continued from page 74)

things that are contained by some other thing. Content can be tangible like a book, painting, or computer. Content can be information. Content can be an ability. Content can be a service. And of course, content can be software. In the era of the cloud, the phrase "software as a service" (SaaS) is nearly ubiquitous. Even Office 365 is a prominent example of SaaS content. Whatever service or product a business provides to the public to drive its earnings, that's content. How a business delivers that content is typically in the domain of some software application to at least some degree.

Agility is the means by which that software is built. If this seems like a circle with no beginning or ending, that's because that's exactly what our world of software development is today. In order to break out of this back and forth in favor of a practical tactical prescription, the manifesto itself provides a clear answer: Rely on use cases to drive process and tool selection.

In agile, we identify the items that are planned, and their selection is informed and driven by the items that are in response to change.

In a previous column (CODA: On Tools and their Selection in the March/April 2022 edition of CODE Magazine), I described a system and approach for tool selection to avoid what I call "tool salad." Perhaps a good process and approach may be to adopt domain-driven design (DDD) and a tool that supports DDD. For example, your business may have multiple service lines, each in their own domain and perhaps consisting of multiple sub-domains. Further, your business may have several cross-cutting domains like HR, accounting, legal, and compliance. To manage these facets in an agile manner, such that you can respond quickly to change, requires a plan.

Perhaps that plan consists of implementing DDD and DevOps. To speed delivery, you may consider continuous integration and delivery (CI/CD). How do you implement such things? Clean code (SOLID) programming practices are one such way.

And if you're in a regulated environment (SOX, FDA, EPA, etc.), those rules and regulations, by and through regulatory agencies, bring many considerations as to how you will distribute your content.

Inspection and adaptation are key concepts in both agile and continuous improvement. That's the circular aspect of the business of software development. It's ironic that businesses often lament how they must continue to keep pace with their own industries. Keeping pace often requires new and modified services and products (content). And that often means that the way we build that software content must change as well.

Changing how we build software often gets short shrift. It's true that technology must serve the business. But in order to serve the business, the business must also serve and maintain its technology. Although the business is the primary driver, each must service the other simultaneously. That's where the relationship between agility and content distribution comes in.

We must continuously search for the truth with our theories and abstractions. On one hand, we must widen our lens so that our abstractions have as broad an application as possible. At the same time, we must place limits on how abstract our abstractions can be, because there must be practical application. The latter requires that we also narrow our lens and lower our altitude.

All of this seems rather counter-intuitive and contradictory. If we accept that one view informs the other and vice versa, and we require that everything be substantive (avoiding marketing chatter and quick summaries), we can undertake the serious hard work of solving today's complex problems. Solving problems involves a certain amount of informed trial and error to make a positive impact. And agility applied to a business's content distribution is a means to achieve that positive impact.

— John V. Petersen

Editorial Contributors: Otto Dobretsberger, Jim Duffy, Jeff Etter, Mike Yeager
Writers In This Issue: Bob Calco, Bilal Haidar, Joydip Kanjilal, Wei-Meng Lee, John Petersen, Paul D. Sheriff, Shawn Wildermuth
Technical Reviewers: Markus Egger, Rod Paddock
Production: Friedl Raffeiner Grafik Studio (www.frigraf.it)
Graphic Layout: Friedl Raffeiner Grafik Studio in collaboration with onsight (www.onsightdesign.info)
Printing: Fry Communications, Inc., 800 West Church Rd., Mechanicsburg, PA 17055
Advertising Sales: Tammy Ferguson, 832-717-4445 ext 26, tammy@codemag.com
Circulation & Distribution: General Circulation: EPS Software Corp.; Newsstand: American News Company (ANC)
Subscriptions: Subscription Manager, Colleen Cade, ccade@codemag.com

US subscriptions are US $29.99 for one year. Subscriptions outside the US are US $50.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards accepted. Back issues are available. For subscription information, e-mail subscriptions@codemag.com. Subscribe online at www.codemag.com.

CODE Developer Magazine
6605 Cypresswood Drive, Ste 425, Spring, Texas 77379
Phone: 832-717-4445
codemag.com CODA: Agility’s Primary Concern: Content Distribution 73


CODA

Agility's Primary Concern: Content Distribution

It seems that in whatever business or tech-related Internet corner we find ourselves, another era of "agility debates" is back. What is agility? What isn't agility? I find these debates to be a waste of time, and I spent a good amount of time early in my career engaging in them! If you're familiar with my writing, you understand that I'm always in search of two basic things. First, finding the clearest path from the abstract to the concrete. Theory is important, but only if it has practical application. Practical should be long on value, short on ceremony. Application, to be practical and useful, must be clear and straightforward.

That's the value proposition to anything. It's that value proposition that gets to the second item, the broadest possible application. The irony here is that finding the broad application requires making the concrete abstract. At the same time, in order to make the abstract have practical application, it must be made concrete. One feeds into the other simultaneously, like an Escher drawing with no beginning and no end. It's more like the two sides of a coin, which is part of the primary currency in business and technology today. Generally, necessary things in business come in pairs—complementary pairs, much like tactics and strategy. Tactics must support and be informed by a coherent strategy, and any coherent strategy must be tempered by coherent, actionable tactics.

Agility and content distribution have become quite misunderstood.

That leads to the two things I'm keenly interested in today: agility and content distribution, two things that have largely become the stuff of middle-management marketing reductionism. Both agility and content distribution, like blockchain, have become misunderstood, despite their importance and despite the substance they represent. This pair, I believe, is a good example of the two-sided coin metaphor and, when broken down, illustrates what I further believe to be the proper lens through which to view the intersectionality of business and technology and, therefrom, realize the value proposition that agility and content distribution present.

From a practical standpoint, one or the other cannot be viewed in isolation. It's the same line that's between practice and theory or, put another way, the concrete versus the abstract. In isolation, these two things are indeed independent abstract things. They're very much akin to concepts like design patterns and domain-driven design. If there's no practical application for those abstract things, what good are they?

We see this with agile today. A shop proclaims, "We're agile." Or "We practice agility." But all too often, when we try to pin them down on how they're agile, we're greeted with crickets. This usually happens when agility is but an aspirational thing. If we understand at an abstract level what different and complementary things are, how they relate to one another and in combination, and how they, as a combined unit, relate to other things, then perhaps we can begin to understand how these different things can be joined as two sides of a coin to fuel the business.

Let's try that with agility and content distribution here.

Agility

Agility is defined in the Agile Manifesto (agilemanifesto.org), which is a set of principles. That's a very important point. In other words, Agile isn't what we think it is. Agile is what it says it is, as defined in the manifesto. Agile isn't a practice, it's a collection of principles. To practice Agile principles, there must be some concrete application. Agile principles state:

• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan

The Agile Manifesto concludes with: "…[w]hile there is value in the things on the right, we value the items on the left more." The "left" and "right" here are the things that are stated before and after the word "over," respectively. Ultimately, what is of acceptable value is determined by many things that, among other things, are specific to your firm's and the project's context. The big point is that too often, the manifesto's concluding sentence is forgotten. Put another way, we don't only consider the items on the left to the entire exclusion of the things on the right.

There are infinite ways to practice agile. The only requirement is that whatever mode of practice is implemented, that mode must support the manifesto, and such support must be demonstrable through objective evidence. Too often, the manifesto is misread and misunderstood to imply or explicitly state that "agile means no documentation." A plain reading of the manifesto clearly disproves that assertion.

Another misunderstanding is "agile means no planning." If we look to established agile frameworks, like scrum, that recognize sprint planning as a necessary event, again, we can easily conclude that agility and planning are not mutually exclusive and antagonistic to each other. To the degree that we engage with planning, that planning should be tempered by what we engage in while responding to change.

If there were no plan, all we'd do is react, not respond. Reactions are quick and reflexive, not considered and thoughtful. It's through interactions with our customers that we plan enough to build working software. A document that purports to articulate what proposed software is to accomplish is just that, a document. The document may deliver a good, aspirational story, but it doesn't yield the same value that working software would. That story, at best, is a means to some other end. As a communication device, there is most certainly value. The end result must be some value proposition that the business will realize and understand: that is, delivered and working software (working defined as conforming to a defined specification).

Serving the business is what technology is supposed to be about. And agility is one means by which that can be done. And that means can only have efficacy if it's properly matched with what the business does. In my opinion, every business's value proposition is found in how it distributes its content.

Content Distribution

Traditionally, we see the phrase "content distribution" solely in the media context. Think HBO Max, TikTok, Netflix, etc. "Content" is simply those

(Continued on page 73)



