Are you trying to learn to code? Or perhaps you're an educator or a student, or you know someone who is? Sometimes it's intimidating when you consider all the things to install and run to get started.
Well, we've created a series of all-in-one installers - coding packs - that will set you up in Python, Java, or as of today - C# and .NET!
ACTION: Register free for BUILD 2021 here! Why register? You don't have to, you can just visit the site and watch the stream if you like, but if you register you can ask questions LIVE and interact LIVE with the team!
Also, be sure to sign up for the Microsoft Build Cloud Skills Challenge! You complete modules in Microsoft Learn, learn some cool stuff AND (believe it or not) you can win a meeting with me! How weird is that? Someone thought I'd be a prize to someone. My wife is unimpressed. Regardless, you can win meetings - consultations if you're fancy - with me and other Microsoft Engineers, so sign up today!
See you soon! I miss y'all and I hope you and your families are safe and OK.
Sponsor: Extend your runway and expand your reach. Oracle for Startups delivers enterprise cloud at a startup price tag with free cloud credits to help you reel in the big fish—confidently. Learn more.
And you know that you've been able to run Linux server apps on Windows for a long time. There's even support in VS Code and VS2019 to debug those apps.
But how far can we take this? What about Debugging a .NET web app running under Linux while running Visual Studio 2019 for Windows and accessing that web app via a Linux Browser?
Why? Why the heck not? Seriously though, because choice and flexibility. If this solution isn't interesting to you, then perhaps you don't have this problem. But if you do have this problem, then here's the solution. Welcome!
Prerequisites
At some point soon, WSL and WSLg will be a part of the mainline of Windows, but at the time of this writing they are inside Windows 10 Insiders 21362+. Follow the instructions here to set up WSL2 and WSLg. This assumes you're running a distro like Ubuntu.
Then add a browser like Edge for Linux or Chrome for Linux as below.
You'll know they are working and installed when the Linux GUI apps show up in the Windows Start Menu.
Adding a Linux Browser to Windows Visual Studio 2019
Open up Visual Studio 2019, and either open up or create a Web Application. From Debug button there's a dropdown (chevron) where you access this menu:
Select "Browse With..."
From the Browse With dialog, you're going to add a new Browser, selecting "C:\Windows\System32\wslg.exe" as the Program and "~ -d Ubuntu /usr/bin/microsoft-edge-dev" as the Arguments. Ignore any errors.
You should see the new Browser inside Visual Studio 2019 now and can select it like any other browser.
Boom. Here I am running my Podcast website under Linux on .NET 5 on the server-side AND on the client-side in the Edge Browser as a Linux GUI app!
Sponsor: Build your apps where your customers are. Oracle for Startups delivers enterprise cloud with no lock-in so you can go after any customer—confidently. Learn more.
We had a great time at the BUILD 2021 conference this year. My team and I worked really hard to put together a great Application Development Keynote and I think we did it in a way that might surprise you. Check it out!
There has never been a time where developers had more access to tools and services to be more productive. Join Scott Hanselman and some of his friends as they show all the innovative ways for developers to be successful.
00:00 - Application Development with Scott Hanselman & Friends Keynote
10:50 - You can easily apply multiple repeated edits to your codebase in Visual Studio 2019 using IntelliCode. Check out the link for more information: https://devblogs.microsoft.com/visual...
12:00 - Blizzard builds and debugs production issues in Diablo IV using Visual Studio 2019 and Windows Subsystem for Linux. To learn more about this great partnership, go here: https://devblogs.microsoft.com/cppblo...
18:25 - The Visual Studio team is working on making the next version even better, including making it 64-bit. Read this blog post to learn more: https://devblogs.microsoft.com/visual...
Sponsor: Build your apps where your customers are. Oracle for Startups delivers enterprise cloud with no lock-in so you can go after any customer—confidently. Learn more.
There's something happening in the E Ink space, somewhat quietly, but consistently. It's going to be interesting to see if it's a fad or if E Ink tablets are here to stay. I love my Amazon Kindle and I love its E Ink display. I'd say 90% of my reading in the last 5 years has been on a Kindle with E Ink. They are bright in direct sunlight, and the newer ones have color temperature settings. The starter Kindle is about $90 and you'll often find sales.
For mostly static content like books or magazines, E Ink is an amazing paper-like technology. We seem to be putting a huge amount of technology and work into creating displays to replace paper. First the look, and most recently the feel of writing on paper. These one page digital devices promise to act as Infinite Paper.
E Ink is easier on the eyes than OLED and iPads and the like. How does it work? The simple explanation is that there are tiny capsules of negatively charged black pigment and positively charged white pigment. We can apply a negative or positive charge and the black or the white pigments will jump to the top. It's kind of like an Etch A Sketch, except with electricity rather than a surface covered in aluminum powder. These displays are as close to paper as you can get, today, digitally.
The reMarkable 2 - This is the second-gen reMarkable. It's a dedicated and distraction-free note taker. It has no browser, no apps to speak of, but an enthusiastic community of hackers and 3rd party projects. This device is NOT an iPad and if your first thought is, "but I have an iPad," then this isn't for you. However, if you like Moleskine notebooks and your shelves are filled with many years' worth of them, then take a good look. This 10.3 inch unlit screen is the best device for taking notes, reading PDFs, and...taking notes. It's incredibly well built, feels high quality, is light but substantial, and it doesn't warp or feel cheap. If you pair it with their Marker Plus that includes an eraser, the feeling is top notch. It has a great Desktop App that also has a Beta "Live View" feature where you can share your screen in Teams or Zoom and see what you're writing on your reMarkable. There's so much potential here if they'd open up the APIs and integrate into things like OneNote, Teams, etc. I'd love to see someone be able to connect two of these and write on a shared whiteboard!
- One small downer, I did drop a Marker and it landed just right and broke off not just the tip (no big deal, it comes with a dozen replacements) but also the tiny hole the tip goes into (not replaceable). So, treat the pens with reverence.
Onyx Boox Note Air - This could pass for the reMarkable from a distance, but it's actually an Android 10 device that can have Google Play added. It's also 10.3" and E Ink but adds a backlight; this hybrid device is a note taker and PDF viewer until you are suddenly installing Microsoft Office or Netflix. The surreal part is that what the device thinks it's displaying doesn't always jibe with what is being displayed. For example, it's a black and white device, so some shading and subtleties are lost...but they are there, in video memory. That means you can easily share this Android device's screen to your TV or monitor and it's...Android! There is some ghosting which is a feature, not a bug, but the Onyx Boox Note Air has a surprisingly large array of basically "ghosting display choices" that allow you to select the right balance between ghosting and eventual consistency. It takes a moment to figure out but it's quite good when dialed in. Combine the Note Air with a Bluetooth keyboard and you've got an E Ink word processor. If you have $500 and can't decide between a reMarkable and a Boox Note Air, it comes down to the fact that the Note Air is Android. You're getting more functionality, if slightly less software polish. As a note taker, the polish of the reMarkable 2 is the winner. But the Note Air is the best general purpose E Ink tablet.
Onyx Boox Nova3 Color - This device is just 7.8" but has a color Kaleido Plus E Ink screen. COLOR E Ink is really something to see. Do check out my video review on YouTube - here's a link right to the color parts. It's not a rich deep marker type color, it's a muted older comic book type color...but it works. It adds something, and reading comics on it in Comixology is magical, albeit with some ghosting. This device is also Android so consider it a 2 inch smaller version of the Note Air. It's the "color iPad Mini" to the Note Air's "black and white iPad Pro."
Later this month I'll take a look at Supernote which already has an enthusiastic community and promises to have a rich API for 3rd parties to explore and expand.
E Ink and "E Paper" are becoming more prominent on sites like Kickstarter and IndieGogo. An India-based company called paperd.ink is creating a low power E-paper development board. The rise of inexpensive E-paper/E Ink displays along with ESP32s with WiFi is creating tiny low power computers that blur the user's perception of what a microcontroller can do.
What are your thoughts and opinions about E Ink? Will your next tablet be an E Ink display?
I often use Amazon Affiliate links and you're helping this blog when you use them, thanks!
Sponsor: Extend your runway and expand your reach. Oracle for Startups delivers enterprise cloud at a startup price tag with free cloud credits to help you reel in the big fish—confidently. Learn more.
Then install Jon Sequeira's "dotnet repl" with this one line global tool install:
dotnet tool install --global dotnet-repl
Then just type dotnet repl at the command line. Use the Windows Terminal ideally. That will drop you here!
With .NET Interactive/.NET Notebooks at the heart, consider this command-line experimental REPL (Read Evaluate Print Loop) to be a text-based notebook!
Start typing! If you make a mistake and press enter, type Ctrl-UpArrow to bring that line back so you can try again.
You can even add NuGet packages with #r "nuget:YourPackage"
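For instance, a quick session might look something like this (Newtonsoft.Json is just an illustrative package here; any package name and version works):
#r "nuget:Newtonsoft.Json, 13.0.1"
using Newtonsoft.Json;
// serialize an anonymous object right in the REPL and print the result
JsonConvert.SerializeObject(new { Hello = "World" })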
Go learn more and give feedback at https://github.com/jonsequitur/dotnet-repl. You can even run .NET Notebooks with this, as a script! This REPL supports F# and C#. Love it.
Sponsor: Extend your runway and expand your reach. Oracle for Startups delivers enterprise cloud at a startup price tag with free cloud credits to help you reel in the big fish—confidently. Learn more.
One of the best parts of the .NET ecosystem is the excitement around experimentation. Someone is always taking .NET to the next level, trying new things, pushing the envelope.
Michal Strehovsky has an interesting experiment on his GitHub called "bflat." This is not a product, it's a playground.
bflat is a concoction of Roslyn - the "official" C# compiler that produces .NET executables - and NativeAOT (née CoreRT) - the experimental ahead of time compiler for .NET based on CoreCLR's crossgen2. Thanks to this, you get access to the latest C# features using the high performance CoreCLR GC and native code generator (RyuJIT).
bflat merges the two components together into a single ahead of time crosscompiler and runtime for C#.
I find this characterization funny:
bflat is to dotnet as VS Code is to VS.
Michal is basically stripping .NET down to the bare minimum and combining the official compiler and the experimental AOT (Ahead of Time) compiler to make single small EXEs that are totally self-contained.
Michal says you can get involved if you like!
If you think bflat is useful, you can leave me a tip in my tip jar and include your GitHub user name in a note so that I can give you access to a private repo when I'm ready.
Hello World today is about 2 megs. He says it's because:
By default, bflat produces executables that are between 2 MB and 3 MB in size, even for the simplest apps. There are multiple reasons for this:
bflat includes stack trace data about all compiled methods so that it can print pretty exception stack traces
even the simplest apps might end up calling into reflection (to e.g. get the name of the OutOfMemoryException class), globalization, etc.
method bodies are aligned at 16-byte boundaries to optimize CPU cache line utilization
(Doesn't apply to Windows) DWARF debug information is included in the executable
Sure, it's not C code because it'll never be C code. You get access to a LOT MORE with C#.
This could be a useful system for creating tiny apps in C# for Linux or Windows command line administration. It also showcases how the open pieces of .NET can be plugged together differently to achieve interesting results.
I'm sure there are lots of AOT limitations around Reflection, Attributes, and more, but this is still a very cool experiment, go check it out at https://github.com/MichalStrehovsky/bflat!
Sponsor: Pluralsight helps teams build better tech skills through expert-led, hands-on practice and clear development paths. For a limited time, get 50% off your first month and start building stronger skills.
Then, after running code $profile or notepad $profile, add
Import-Module PSReadLine
Sure, but next, add these:
Set-PSReadLineOption -PredictionSource History
Set-PSReadLineOption -PredictionViewStyle ListView
Set-PSReadLineOption -EditMode Windows
This means that PSReadLine (and hence, your prompt in general) will use your prompt history to make predictions on what you want to see next. These predictions can be on one line in light gray (full details on Jason's blog) but I like them to pop down in an ANSI-style ListView. Then you can edit them with up and down arrows (or Emacs or VI bindings soon).
I'm loving PSReadLine and will be doing a video on setting up your best prompt soon.
Sponsor: Pluralsight helps teams build better tech skills through expert-led, hands-on practice and clear development paths. For a limited time, get 50% off your first month and start building stronger skills.
"Core isolation is a security feature of Microsoft Windows that protects important core processes of Windows from malicious software by isolating them in memory. It does this by running those core processes in a virtualized environment.
Memory integrity is one feature of core isolation which regularly verifies the integrity of the code running those core processes in an attempt to prevent any attacks from altering them.
We recommend that you leave this setting on, if your system supports it."
Cool. Turns out this was added way back in 2017 in Windows 10 build 17093. I ran the Windows Security app on my system and noticed a few things. First, at the bottom it says "Your device meets the requirements for standard hardware security" but this could instead read "...for enhanced hardware security."
Some of these technologies are quite old and have been in Windows for a while. It's the collection of all them together, working as a team, that enhances your systems security. Virtualization-based Security (VBS) isolates a secure region of memory from the rest of the OS.
I started digging to understand what was interesting or unique about my system that was preventing me from turning these new features on. Additionally I wanted to make sure I was ready for Windows 11 whenever it arrives and adds more security features and requirements.
Go to the Windows Security app and click Device Security.
I clicked on Core Isolation to turn on VBS and noticed that the on/off switch was grayed out and I could scan for driver incompatibilities. I want to ensure that drivers I have loaded into the kernel are secure. Windows 10 has a feature called Device Guard and drivers need to be written in certain ways to ensure they have a clear separation between data and code, and can't load data files as executable, or use dynamic code in the kernel. Again, NONE of this is new and goes back as far as 2015 or earlier.
What do I have installed? Well, friends, a ton of crap, it turns out! LOL. All of these drivers are either super old or are using insecure coding techniques that are preventing my system from turning on the Core Isolation Memory Integrity feature.
I can start searching for each of these and I see a few interesting culprits. Remember, these are all either old or poorly written drivers that are loaded into the kernel on my desktop machine, chillin'.
That Western Digital one? Notice that it even says "_prewin8.sys" so I hope someone from WDC reads this blog and feels just a little bit bad about it. This is from an external USB hard drive. I certainly don't need whatever extra feature that driver lights up. My USB hard drive is just fine without it.
The STT*.sys and S3x*.sys drivers are all from various Arduino COM Port utilities and DFU-util firmware flashers. Remember those unsigned warnings you thought nothing of years ago? Well, those drivers are still with you...I mean, me.
It's easy to look for "Windows Driver Package" and line up some of these drivers with actual installers and remove them from Add/Remove Programs.
However, since I do a lot of IoT stuff and install random INFs manually...many of these drivers won't show up in ARP (Add/Remove Programs).
I could use Autoruns.exe and click the Drivers tab, but not every one shows up there, and even if you uncheck a driver here it won't be removed from the Windows Security Scan. It needs to be uninstalled and deleted.
For visible drivers, I can open Device Manager and look at the Driver details for each one.
If the .sys file matches, I can right click uninstall and check the delete checkbox to remove the driver entirely.
This NDI Webcam Input (NDI Virtual Input) driver knowledge base literally tells you to turn off Secure Boot and turn off Memory Integrity to install their unsigned driver. No thanks.
From an admin command line you can get a list of drivers. This one gets a list in PowerShell and puts it in your clipboard.
get-windowsdriver -online | clip.exe
While this one works anywhere and gets a simple list:
wmic sysdriver get name
TL;DR - Find the oem.inf from the Incompatible Drivers list and remove it at the Command Line.
But when you have the list from the Incompatible Drivers scan as seen in the screenshot above, just click each driver and you'll see the "oemXX.inf" file that describes the driver. Note your numbers will vary.
Then you can use pnputil, which comes with Windows, to delete the driver package from your system's driver store:
pnputil /delete-driver <example.inf> /uninstall
Here is me doing that:
Do be conscious of each driver and what it does and consider what functionality - if any - you'll be losing if you remove them. If this blog post (or, specifically, you following the directions of this blog post) renders your machine unusable or unbootable, I'm sorry, but you've got to do your research and back up your system. You should be able to turn it off and reinstall, but still, be careful.
Now I'm all set:
And my system says "meets the requirements for enhanced hardware security." Sweet.
Hope this helps you and sets you up for future success. I did a LOT of searching to figure this out and spent many hours to break this down for y'all.
Sponsor: This week's sponsor is...me! This blog and my podcast has been a labor of love for over 18 years. Your sponsorship pays my hosting bills for both AND allows me to buy gadgets to review AND the occasional taco. Join me!
.NET Core is open source and cross platform, and Remarkable.NET is built on .NET Core 3.1, which has binaries for ARM32, which sets us up nicely for use on the reMarkable tablet.
On my Remarkable I can go to Settings | Help | Copyright and Licenses to see my IP address and SSH root password.
Now I can scp and ssh root@192.168.1.71 and enter the password. Note that OpenSSH is built into Windows 10, so you don't need to install PuTTY or WinSCP, but feel free if it makes you happy. You can also use WSL on Windows 10 if you prefer.
After downloading .NET Core 3.1 to my local machine I use scp to copy to /home/root then I ssh into the Remarkable Tablet. Of course, your IP will be different.
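Roughly, that dance looks like this (a sketch; the archive name is a placeholder and your tablet's IP will differ):
# copy the .NET Core 3.1 ARM32 build over, then connect
scp dotnet-sdk-3.1.x-linux-arm.tar.gz root@192.168.1.71:/home/root/
ssh root@192.168.1.71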
I'm not going to lie, the UX for new people isn't perfect. Basically a new version dropped whilst the Toltec devs were mid re-architecture. Step one: back up your root password and drop an SSH key on the device!
2. Install Toltec but use the testing repo. Stable basically doesn’t work - I have already queried the nature of ‘stable’
3. Install the display package. This includes rm2fb, which is a lib with hooks that make rm1 apps work on rm2. The community has kinda standardised on using the rm1 display API.
4. Install .NET on the device. (Ed. Note: Nice to know .NET 5 works also!)
5. Build the remarkable.net sandbox binary as Release/ARM. Copy to device and run with rm2fb-client dotnet Sandbox.dll
7. To see anything on screen you'll have to go through my ramblings on Discord. Basically find the DllImport references to libc and change them to point to /opt/lib/librm2fb_client.so.1.0.1
Also, to run your app, shut down xochitl, otherwise they'll both fight over the display and make a mess. Ideally you would use a launcher to manage that, but as I'm just debugging I run from the CLI. Don't disable xochitl; that way a reboot will fix anything bad.
I'm still exploring but I'm enjoying the ride! (as always, no warranty express or implied!)
NOTE: If you mess up your remarkable messing around with Toltec or think you Bricked it (again, don't complain to me, please) you can connect it with USB and ssh root@10.11.99.1 locally and once you are there, there is a great thread here on how to uninstall Toltec.
Have fun, be safe.
Sponsor: Pluralsight helps teams build better tech skills through expert-led, hands-on practice and clear development paths. For a limited time, get 50% off your first month and start building stronger skills.
I'm sure you have a ton of tips and learnings on how to create inclusive meetings where some people are remote and some are not. Do you happen to have it written down somewhere?
We are discussing what guidance and technology we could use for the teams when coming back to a hybrid world, where meetings will surely have people connected remotely. For example, we were wondering how we can take some things from remote meetings like the chat window – which actually makes it so much easier for everybody to participate – to this hybrid world (maybe projecting in the room, maybe assigning somebody to voice comments, etc.). Other areas we are discussing: how to deal with whiteboarding, how to avoid communication not flowing for remote people, recording meetings for people in different time zones…
I haven't written anything on Hybrid meetings where some folks are remote and others are starting to go back into the office. Fortunately, Mads Torgersen on the team is slowly making his way back into the office and has offered me these words to share with you, Dear Reader! I've paraphrased and edited this some as well. Thanks, Mads!
Mads: Last week I held a hybrid meeting! Which means that I was in the conference room with other people (ok, one other person), and the rest participated remotely via teams. The explicit purpose of the setup was to start gaining experience and learning the tricks for when there are folks back in the office on a more regular basis in phase 6.
This is to share my initial experiences, and encourage any conversation or tips other people have picked up. Feel free to share. There is no formal follow-up, and I know there are conversations around this going on in multiple places; it just feels to me like [a good place to start a] conversation at this point.
The conference room had the usual equipment of a projector and a room camera, ambient audio and a control panel in the middle of the table running a Teams client.
Scott: While we are using Teams at work, much of these tips can be used with Zoom and other video conferencing software.
First do no harm: Mads: The most important goal is to never go back to remote participation being a second-class experience! The remote experience in Teams needs to not deteriorate even one little bit when a conference room joins in. This means that everyone in the room should also be joined to the Teams meeting. Bring a laptop or other Teams-enabled device, turn off audio input and output on it (the room will take care of that) and use the Teams features as you would as a remote participant: Raise hands (Best. Feature. Ever!), participate in chat, send reactions.
Scott: If you're using Zoom or don't have a TV or room system, you can have everyone with laptops in the room join the meeting so their faces are shared, then have just one central person have their mic and speakers on. The idea is to allow the folks who are remote to see the context of what's happening and react to facial expressions as if they were in the room!
Create the space: Mads: At the same time, once several participants are coming to the office again, I think we should be careful not to create a force away from the office, making people stay at home just so they can go to meetings. If you don’t include a room in your meeting, you are compelling people to disturb their team room mates, scramble for sparse focus rooms or give up on coming in. The meeting room isn’t just a nice way to get together (though that is nice!), it is simply the most efficient, realistic and best way for on-site folks to participate in a meeting. So: come phase 6, start adding those meeting rooms again!
Scott: This suggestion won't apply to every company, as not every Enterprise has the idea of 'inviting a room.' This is a good tip though if you have a physical shared space back in the office AND that room can be invited so that you're not joining Teams/Zoom on laptops but with the Poly/TV or shared devices in the office room.
Placement in the room: The meeting leader (or in-room designate) needs to sit next to the [main central] Teams panel, so as to use it actively during the meeting (see below). We experimented with where to face. There’s a conflict between looking at your screen and looking at the projected output, but there’s also an efficiency in being able to have those two screens show different things. Also, it’s distracting for remote participants to see in-the-room folks "from the side” on either the room feed or the individual cameras.
We therefore landed on turning our laptops so we would face them in the same direction as the big screen and room camera. That way folks always see you from the front, you don’t have to turn your head between the shared and private screens. An odd downside (especially when more people are in the room) is that folks physically together don’t face each other! I’m still curious to see how this plays out with half-and-half or even majority in-room participants. But don’t forget to do no harm: Remote folks should not feel as if local folks are huddled in a circle and they are standing outside looking at people’s backs. Teams is the primary meeting venue and the physical room is secondary.
A possible other downside to being turned somewhat sideways is ergonomic. This is the same as when someone is giving a presentation and you’re not optimally seated. The emerging social contract here should come with enough wiggle room for folks to be physically comfortable through long-haul meetings.
Scott: What's important here isn't the implied prescription of what directions to face, but that Mads is making a conscious effort to be actively inclusive. He's trying new things and mixing up camera angles so that folks who are remote are present and included in the meeting.
Leading the meeting: Mads: Many of us have several screens at home, and it’s useful to keep track of all the moving parts across a lot of screen real estate. Having just your laptop can be quite limiting, but the Teams client [Scott: or shared TV] in the room can help a lot. First of all, if the room is not invited to your meeting (maybe you have the room invite separate like I do), it’s easy to call the room from the Teams meeting on your laptop, then "pick up” on the panel (or have someone in the room do it if you’re remote). From then on, the room is "in" the meeting.
The panel lets you pick different screen layouts for what is projected, and you can use that to differentiate between what’s on the shared and private screens, clawing back real estate. What worked well for us was to project just the faces ("Gallery Mode”) on the big screen; when something was being shared you could read it better on your private screen anyway, and having remote folks’ faces bigger on the wall made for a much better sense of "connection” and reminder of their presence in the meeting. If you’re leading the meeting remotely, have someone in the room be the designated panel operator.
The panel also shows the participant list in hands-raised order like your own Teams client does, and that frees up even more real estate for the meeting leader, if you’re in the room.
Finally the panel has a spare "raise hand” button for the room, so if you end up with one or two in-room folks who for some reason can’t participate on Teams (maybe they don’t have a laptop), you can have them sit nearby and let them use that to raise their hand during the meeting.
All in all this was a much better experience than I expected. I felt I had the tools I needed to run a good meeting for everyone involved, keeping the experience as good for remote folks, and making it pretty decent for those in the room. As more people get in, a lot is going to ride on good habits, so that remote people continue to be fully included and empowered.
I hope that was useful! Any thoughts, additional or countervailing experiences etc, I’d love to hear them! Together we’re gonna nail this hybrid thing!
Scott: What are your best tips and tricks for good hybrid meetings?
Sponsor: Pluralsight helps teams build better tech skills through expert-led, hands-on practice and clear development paths. For a limited time, get 50% off your first month and start building stronger skills.
I used to call this technique "type tunnelling" and noted its use in XML back in 2005. When you're using a strongly typed language but your types are effectively "stringly typed," you end up passing strings around where a better type exists.
Here are some examples of stringly typed method calls:
Robot.Move("1","2"); //Should be int like 1 and 2
Dog.InvokeMethod("Bark"); //Dispatching a method passing in a string that is the method's name. Dog.Bark()
Message.Push("TransactionCompleted"); //Could be an enum
There are reasons to do each of these things, but as a general rule your sense of Code Smell should light up if you smell Stringly Typed things.
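For contrast, here's a rough sketch of more strongly typed versions of those calls (MessageType is a hypothetical enum, named here only for illustration):
Robot.Move(1, 2); //ints, so the compiler catches Robot.Move(1, "two")
Dog.Bark(); //a direct call instead of dispatch-by-string
Message.Push(MessageType.TransactionCompleted); //an enum, so a misspelling won't compile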
Inline SQL is another, where one language (a proper language with syntax) is tunneled as a string within another. There's no good solution for this as most languages don't have a way to express SQL such that a compiler could notice a problem. Sometimes we'll see fluent APIs like LINQ try to solve this. RegEx is another example of a string language within a language. Sometimes one will see large switch statements that fundamentally change program flow via "magic strings." One misspelling and your switch case will never fire.
Again, these have valid reasons for existence but you won't catch syntax issues until runtime.
LINQPad has a great post on why strongly typed SQL via LINQ or other fluent syntaxes is often better than raw SQL strings. Here's some LINQ in C# that will eventually turn into SQL. You get autocomplete and syntax warnings throughout the authoring process:
from p in db.Purchases
where p.Customer.Address.State == "WA" || p.Customer == null
where p.PurchaseItems.Sum (pi => pi.SaleAmount) > 1000
select p
So why does it matter?
Regex rx = new Regex(@"\b(?<word>\w+)\s+(\k<word>)\b");
This isn't to say all Stringly Typed code is bad. It's to say that you need to make sure it doesn't just happen on its own. Be prepared to justify WHY it was written that way. Is string the only data type the app uses? Are there potential uses where something should be a Message or an Event or a Something and it was just easier or simpler to use a string? And here's the rub - was this Stringly Typed data structure passed to another component or service? Did you intend for its semantic meaning to be retained across this logical (or physical) boundary?
A great litmus test is "how would I catch a misspelling?" Compiler? Unit test? Production ticket?
What do you think about Stringly Typed code? Do we make types for Name and Surname? Is that too far? Do we string all the things?
Sponsor: Pluralsight helps teams build better tech skills through expert-led, hands-on practice and clear development paths. For a limited time, get 50% off your first month and start building stronger skills.
I wrote about minimal Web APIs in 2016 and my goal has always been for "dotnet server.cs" to allow for a single file simple Web API. Fast forward to 2021 and there's some great work happening again in the minimal API space!
Let's do a 'dotnet new web' with the current .NET 6 preview. I'm on .NET 6 preview 7. As mentioned in the blog:
We updated .NET SDK templates to use the latest C# language features and patterns. We hadn’t revisited the templates in terms of new language features in a while. It was time to do that and we’ll ensure that the templates use new and modern features going forward.
The following language features are used in the new templates:
Top-level statements
async Main
Global using directives (via SDK driven defaults)
File-scoped namespaces
Target-typed new expressions
Nullable reference types
This is pretty cool. Perhaps initially a bit of a shock, but this is a major version and a lot of work is being done to make C# and .NET more welcoming. All your favorite things are still there and will still work but we want to explore what can be done in this new space.
Richard puts the reasoning very well:
The templates are a much lower risk pivot point, where we’re able to set what the new “good default model” is for new code without nearly as much downstream consequence. By enabling these features via project templates, we’re getting the best of both worlds: new code starts with these features enabled but existing code isn’t impacted when you upgrade.
This means you'll see new things when you make something totally new from scratch but your existing stuff will mostly work just fine. I haven't had any trouble with my sites.
Let's look at a super basic hello world that returns text/plain:
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
if (app.Environment.IsDevelopment()){ app.UseDeveloperExceptionPage(); }
app.MapGet("/", () => "Hello World!");
app.Run();
Slick. Note that I made line 3 (which is optional) just be one line to be terse. Not needed, just trying on these new shoes.
If we make this do more and support MVC, it's just a little larger. I could add in app.MapRazorPages() if I wanted instead of MapControllerRoute, which is what I use on my podcast site.
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllersWithViews();
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler("/Home/Error");
// The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
app.UseHsts();
}
// Map the default MVC route (this is where MapRazorPages could go instead) and run
app.MapControllerRoute(
    name: "default",
    pattern: "{controller=Home}/{action=Index}/{id?}");
app.Run();
Back to the original Web API one. I can add Open API support by adding a reference to Swashbuckle.AspNetCore and then adding just a few lines:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
// Wire up the Swagger JSON endpoint and its UI, plus a sample endpoint to document
app.UseSwagger();
app.MapGet("/", () => "Hello World!")
    .WithName("HelloWorld");
app.UseSwaggerUI();
app.Run();
Anuraj has a great blog where he goes deeper, pokes around David Fowler's GitHub, and creates a minimal Web API with Entity Framework and an in-memory database with full OpenAPI support. He put the source at https://github.com/anuraj/MinimalApi so check that out.
Bipin Joshi also did a post earlier in June and explored in a bit more detail how to hook up to real data, noting how easy it is to return entities with JSON output as the default. For example:
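I won't reproduce his code here, but the shape is roughly this (a sketch assuming an EF Core DbContext named TodoDbContext registered in DI; the returned entities are serialized as JSON by default):
app.MapGet("/todos", async (TodoDbContext db) =>
    await db.Todos.ToListAsync());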
Dave Brock did a tour as well and did Hello World in just three lines, but of course, you'll note he used WebApplication.Create while you'll want to use a Builder as seen above for real work.
Is this just about the number of lines of code? Have we moved your cheese? Will these scale to production? This is about enabling the creation of APIs that encapsulate best practices but can give you "middleware-like" performance with a clarity and flexibility that was previously only available with all the ceremony of MVC.
Have fun! Lots of cool things happening this year, even in the middle of the panini. Stay safe, friends.
Sponsor: Pluralsight helps teams build better tech skills through expert-led, hands-on practice and clear development paths. For a limited time, get 50% off your first month and start building stronger skills.
Hey friends! I wanted to remind you about my podcast! It's http://hanselminutes.com/ and just a few weeks ago I published my 800th episode! My first episode was in January of 2006 so that's over 15 years of shows. And, if I may be a little boastful for a moment, they are pretty darn good. Maybe the first 400 were a little rough but these last 400 have been ROCK SOLID. Just kidding.
Seriously, though, this 30 minute long tech show has diverse topics and new faces you haven't heard on other podcasts. If you check out over 800 episodes here https://www.hanselminutes.com/episodes you can search by Title, Guest, OR search all the Transcripts! There's over 400 hours of shows and you can search for the topics you want.
Subscribe with your favorite podcast app, the raw RSS is here. We're also available on basically every podcast app out there, including, but not limited to:
If you enjoy the show, the best thing you can do to help me is SPREAD THE WORD! Tell a friend, share a favorite episode, but above all GET FOLKS TO SUBSCRIBE.
The world is littered with podcasts that gave up after 9 episodes. There's a ton of average talk shows that ramble on. I've worked really hard - at night, as this is not my day job! - to not only bring you the best guests, but to read their papers, books, and theses, and ask the questions that YOU would have if you were here with me!
Sometimes I even put the Hanselminutes Podcast on YouTube and the results are truly special and heartbreakingly emotional.
Thanks for listening, and thanks for sharing!
Sponsor: YugabyteDB is a distributed SQL database designed for resilience and scale. It is 100% open source, PostgreSQL-compatible, enterprise-grade, and runs across all clouds. Sign up and get a free t-shirt
With .NET 6 on the near horizon, one notes that Carter has a net6 branch. Per their website, this is the goal of the Carter framework:
Carter is a framework that is a thin layer of extension methods and functionality over ASP.NET Core allowing code to be more explicit and most importantly more enjoyable.
As of today you can bring Carter into your .NET 6 projects like this:
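The package itself comes from NuGet, so it's the usual add-package dance (check NuGet for the version that lines up with the net6 branch):
dotnet add package Carter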
Here's a simple Web API sample with Carter that returns a list of actors at localhost:5001/actors
using Carter;
using CarterSample.Features.Actors;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<IActorProvider, ActorProvider>();
builder.Services.AddCarter();
var app = builder.Build();
app.MapCarter();
app.Run();
Nice! This is using new .NET 6 features so there's no Main(), it's implied. The builder has an ActorProvider added as a Singleton. I bet we'll use that when we ask for /actors in our browser or favorite HTTP API client.
public class ActorsModule : ICarterModule
{
public void AddRoutes(IEndpointRouteBuilder app)
{
app.MapGet("/actors", (IActorProvider actorProvider, HttpResponse res) =>
{
var people = actorProvider.Get();
return people;
});
...
}
}
This is nice and clean. Everything is using Dependency Injection so no one is "newing up" an Actor. You'll note also that returning the Actors as JSON is implied when we return the IEnumerable<Actor> that comes from actorProvider.Get().
In fact, the whole Actor Module is just 80 lines so I'll include it here:
public class ActorsModule : ICarterModule
{
public void AddRoutes(IEndpointRouteBuilder app)
{
app.MapGet("/actors", (IActorProvider actorProvider, HttpResponse res) =>
{
var people = actorProvider.Get();
return people;
});
app.MapGet("/actors/{id:int}", (int id, IActorProvider actorProvider, HttpResponse res) =>
{
var person = actorProvider.Get(id);
return res.Negotiate(person);
});
app.MapPut("/actors/{id:int}", async (HttpRequest req, Actor actor, HttpResponse res) =>
{
var result = req.Validate<Actor>(actor);
if (!result.IsValid)
{
res.StatusCode = 422;
await res.Negotiate(result.GetFormattedErrors());
return;
}
//Update the user in your database
res.StatusCode = 204;
});
app.MapPost("/actors", async (HttpContext ctx, Actor actor) =>
{
var result = ctx.Request.Validate<Actor>(actor);
if (!result.IsValid)
{
ctx.Response.StatusCode = 422;
await ctx.Response.Negotiate(result.GetFormattedErrors());
return;
}
//Save the new actor in your database
ctx.Response.StatusCode = 201;
});
// ...the DELETE and /actors/download routes follow in the full 80-line module
}
}
Note the API example at /actors/download that shows how to return a file like an MP4. Nice and simple. This sample also includes thoughtful validation code with FluentValidation extension methods like ctx.Request.Validate().
Carter is opinionated but surprisingly flexible. You can use either of two routing APIs: the classic module-style routing shown below, or the clean and performant endpoint routing used in the sample above:
this.Get("/", (req, res) => res.WriteAsync("There's no place like 127.0.0.1")).RequireAuthorization();
It even supports OpenAPI out of the box! Carter has an active Slack as well as Templates you can add to make your next File | New Project easier!
dotnet new -i CarterTemplate
The following template packages will be installed:
CarterTemplate
Success: CarterTemplate::5.2.0 installed the following templates:
Template Name Short Name Language Tags
--------------- ---------- -------- ------------------------------
Carter Template carter [C#] Carter/Carter Template/NancyFX
There's a lot of great innovation happening in the .NET open source space right now.
Sponsor: YugabyteDB is a distributed SQL database designed for resilience and scale. It is 100% open source, PostgreSQL-compatible, enterprise-grade, and runs across all clouds. Sign up and get a free t-shirt
I've long blogged about my love of setting up a nice terminal, getting the prompt just right, setting my colors, fonts, glyphs, and more. Here's some of my posts.
I want to take a moment to update my pretty prompt post with a little more detail and a more complex PowerShell $PROFILE, due to some changes in Oh My Posh, PowerShell, and the Windows Terminal. I doubt that this post is perfect and I'm sure there's stuff here that is a little extra. But I like it, and this post will serve as my "setting up a new machine" post until I get around to writing a script to do all this for me in one line.
Get Windows Terminal if you don't already have it - it's free in the Microsoft Store. If you don't have access to the Microsoft Store, the builds are published on the GitHub releases page. It comes with a lovely font called Cascadia Code...but...
Now that you have Windows Terminal, you'll notice that it knows that you have PowerShell installed and will add it to your Windows Terminal dropdown menu! You can set PowerShell as your default Profile - that's the one you'll get by default when you make a new Tab - in settings:
Upgrade your Terminal/Console Fonts
I like fonts with lots of Glyphs so I also download and Install Caskaydia Cove Nerd Font Complete. This is the same Cascadia Code font but MODIFIED to include hundreds of special characters that you can use to make your prompt cooler.
IMPORTANT NOTE: The string literal name of this font for use in settings or VS Code is "CaskaydiaCove NF". If you're using Cascadia Code, there are different strings for each. The NUMBER ONE question I get is 'why don't my glyphs/fonts show up right in Windows Terminal/VS Code?' and the answer is almost always "you're using the wrong font string." It's usually either an extra space or a missing space, so don't be afraid to double check.
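For reference, these are the two places that font string usually goes; a sketch, since newer Windows Terminal builds use a nested "font" object while older ones use "fontFace":
// Windows Terminal settings.json (per profile, or under profiles > defaults)
"font": { "face": "CaskaydiaCove NF" }
// VS Code settings.json
"terminal.integrated.fontFamily": "CaskaydiaCove NF"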
Remember that Windows Terminal has a lovely Settings UI but you can always click "open JSON file" to manage the settings.json as text if you prefer. Here's mine. Yours will be different and you should customize it! The Windows Terminal documentation is fantastic. Again, see how PowerShell is in BOLD? That's because it's my default.
Now, let's add a little...spice...
Add "Oh My Posh" to your Shell
Oh My Posh has amazing docs so check them out. Do note that some stuff has changed, especially from v2 to v3.
EXCITING NOTE: Oh My Posh is portable and works on any shell, so I use it on both my "Pwsh" (PowerShell) in Windows and my Bash shells on WSL running Ubuntu.
You can install Oh My Posh with PowerShell's "Install-Module" or with the platform-specific install instructions. I used the latter, which is somewhat new, but it's tomato/tomato, so use what works for you.
Again, read the docs but the idea on Windows is basically this (or get it from GitHub):
winget install JanDeDobbeleer.OhMyPosh
# restart shell to reload PATH
Then edit $PROFILE and add the following line, remembering at this point that oh-my-posh is an executable on the PATH.
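The exact line depends on your Oh My Posh version; with current builds it's along these lines (~/mytheme.omp.json is a placeholder for whatever theme file you use):
oh-my-posh init pwsh --config ~/mytheme.omp.json | Invoke-Expression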
I have changed my Oh My Posh theme from Jan's default to include my own stuff, and I keep my latest up in a GitHub Gist and also in my DropBox/OneDrive so it's always syncing to all my machines. Mine looks like this, after I download it from my gist.
Yours will vary. Again, read the docs and experiment! Once added, reload your profile for the changes to take effect, or restart your shell.
. $PROFILE
That .json file is filled with "segments" that are documented on the Oh My Posh site in a lot of detail. Overwhelming detail. You can add your computer's battery, your Azure Subscription, the dotnet or node version of your current folder, really anything. Even your Spotify songs. I'm going to make one that shows my Blood Sugar.
Go explore Oh My Posh Themes and then modify them with your own additional Segments.
Again, note that your fonts will need the right glyphs or it will look weird.
Here's a GOOD prompt:
Here's a BAD prompt with an issue!
Why is it wrong? Either the .json file that is your config has been saved wrong or corrupted the Unicode Glyphs, or you've got a font that doesn't have those glyphs.
Re-assert your Git segment in Oh My Posh
Some folks want full git info, status, added, modified, untracked, etc and others just want the current git branch. Check the Git segment and the Posh Git segment to make sure you are getting the performance AND information you need.
I needed to turn on "display_stash_count" and "display_upstream_icon" in my config json, like this:
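In the theme .json that's a couple of properties on the git segment - roughly this shape, with other properties omitted:
{
  "type": "git",
  "style": "powerline",
  "properties": {
    "display_stash_count": true,
    "display_upstream_icon": true
  }
}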
Again, this is all optional and affects performance slightly, but be aware of these properties. I believe I have these set the way I want them in my public gist. Here is me moving around my source code with "z" instead of cd; note how the prompt changes.
Turn your PowerShell directories up to 11 with Terminal-Icons
Is your prompt not extra enough? That's because your directory listing needs color AND cool icons!
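Terminal-Icons comes from the PowerShell Gallery, so the install is the standard one:
Install-Module -Name Terminal-Icons -Repository PSGallery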
And then add one line to my $profile (edit with "code $profile"):
Import-Module -Name Terminal-Icons
Sweet!
How far is too far?
At this point you're basically done, but I also LOVE PSReadLine. It's great generally but also nice for bash and Emacs types who are moving to PowerShell or use PowerShell for work.
I've added features like making "Ctrl+Shift+B" at the command line run "dotnet build." Why? Because I can and because it's muscle memory, so I'm making my prompt work for me.
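That's done with a PSReadLine key handler in $PROFILE; a sketch of the idea looks like this:
Set-PSReadLineKeyHandler -Key Ctrl+Shift+b `
                         -BriefDescription BuildCurrentDirectory `
                         -LongDescription "Build the current directory" `
                         -ScriptBlock {
    # wipe whatever is on the line, type "dotnet build", and run it
    [Microsoft.PowerShell.PSConsoleReadLine]::RevertLine()
    [Microsoft.PowerShell.PSConsoleReadLine]::Insert("dotnet build")
    [Microsoft.PowerShell.PSConsoleReadLine]::AcceptLine()
}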
Check out our Sponsor! YugabyteDB is a distributed SQL database designed for resilience and scale. It is 100% open source, PostgreSQL-compatible, enterprise-grade, and runs across all clouds. Sign up and get a free t-shirt!
There's two versions of a complete Todo API in this sample, one using Entity Framework Core and one using Dapper for data access. Both are lightweight ORMs (object relational mappers). Let's explore the Dapper example that uses SQLite.
The opening of the code in this example doesn't require a Main() which removes a nice bit of historically unneeded syntactic sodium. The Main is implied.
using System.ComponentModel.DataAnnotations;
using Microsoft.Data.Sqlite;
using Dapper;
var builder = WebApplication.CreateBuilder(args);
var connectionString = builder.Configuration.GetConnectionString("TodoDb") ?? "Data Source=todos.db";
builder.Services.AddScoped(_ => new SqliteConnection(connectionString));
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
At this point we've got a SQLite connection string ready to go scoped in the Services Dependency Injection Container (fancy words for "it's in the pile of stuff we'll be using later") and we've told the system we want a nice UI for our Open API (Swagger) web services description. It's WSDL for JSON, kids!
Then a call to EnsureDb which, ahem, ensures there's a database!
await EnsureDb(app.Services, app.Logger);
What's it look like? Just a little make this table if it doesn't exist action:
async Task EnsureDb(IServiceProvider services, ILogger logger)
{
using var db = services.CreateScope().ServiceProvider.GetRequiredService<SqliteConnection>();
var sql = $@"CREATE TABLE IF NOT EXISTS Todos (
{nameof(Todo.Id)} INTEGER PRIMARY KEY AUTOINCREMENT,
{nameof(Todo.Title)} TEXT NOT NULL,
{nameof(Todo.IsComplete)} INTEGER DEFAULT 0 NOT NULL CHECK({nameof(Todo.IsComplete)} IN (0, 1))
);";
await db.ExecuteAsync(sql);
}
Next we'll "map" some paths for /error as well as paths for our API's UI so when I hit /swagger with a web browser it looks nice:
if (!app.Environment.IsDevelopment())
{
app.UseExceptionHandler("/error");
}
app.MapGet("/hello", () => new { Hello = "World" })
.WithName("HelloObject");
You can see how /hello would return a JSON object of Hello: "World"
What's that WithName bit at the end? That names the API and corresponds to 'operationId" in the generated swagger/openAPI json file. It's a shorthand for .WithMetadata(new EndpointNameMetadata("get_product")); which was surely no fun at all.
Now let's get some Todos from this database, shall we? Here's the endpoint for just the complete ones; the one that returns all of them follows the same pattern and is sketched right after it:
app.MapGet("/todos/complete", async (SqliteConnection db) =>
await db.QueryAsync<Todo>("SELECT * FROM Todos WHERE IsComplete = true"))
.WithName("GetCompleteTodos");
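And the one that returns all of them, sketched from the same pattern:
app.MapGet("/todos", async (SqliteConnection db) =>
    await db.QueryAsync<Todo>("SELECT * FROM Todos"))
    .WithName("GetAllTodos");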
Lovely. But what's this Todo object? We haven't seen that. It's just an object that's shaped right. Perhaps one day that could be a record rather than a class but neither Dapper nor EF Core support that yet, it seems. Still, it's minimal.
public class Todo
{
public int Id { get; set; }
[Required]
public string? Title { get; set; }
public bool IsComplete { get; set; }
}
Let's get a little fancier with an API that gets a Todo but it might not find the result! It may produce an HTTP 200 OK or an HTTP 404 NotFound.
app.MapGet("/todos/{id}", async (int id, SqliteConnection db) =>
await db.QuerySingleOrDefaultAsync<Todo>("SELECT * FROM Todos WHERE Id = @id", new { id })
is Todo todo
? Results.Ok(todo)
: Results.NotFound())
.WithName("GetTodoById")
.Produces<Todo>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound);
Don't be sad if you don't like SQL like this, it's just a choice amongst many. You can use whatever ORM you want, worry not.
A thought: The .Produces calls are used by the OpenAPI/Swagger system. In my mind, it'd be nice to avoid saying it twice as the Results.Ok and Results.NotFound are sitting right there, but you'd need a Source Generator or an aspect-oriented post-compilation weaver to tack it on after the fact. This is the only part that I don't like.
Check out our Sponsor! YugabyteDB is a distributed SQL database designed for resilience and scale. It is 100% open source, PostgreSQL-compatible, enterprise-grade, and runs across all clouds. Sign up and get a free t-shirt!
David Fowler doesn't have a blog. I think the psychic weight of having a blog would stress him out. Fortunately, David's 'blog' is actually hidden in his prolific GitHub commits and GitHub Gists.
David has been quietly creating an amazing piece of documentation for Minimal APIs in .NET 6. At some point when it's released we'll work with David to get everything promoted to formal documentation, but as far as I'm concerned if he is slapping the keyboard anywhere and it shows up anywhere with a URL then I'm happy with the result!
To start, we see how easy it is to make a .NET 6 (minimal) app to say Hello World over HTTP on localhost:5000/5001
var app = WebApplication.Create(args);
app.MapGet("/", () => "Hello World");
app.Run();
Lovely. It's basically nothing. Can I do more HTTP Verbs? Yes.
app.MapGet("/", () => "This is a GET");
app.MapPost("/", () => "This is a POST");
app.MapPut("/", () => "This is a PUT");
app.MapDelete("/", () => "This is a DELETE");
What about other verbs? More than one?
app.MapMethods("/options-or-head", new [] { "OPTIONS", "HEAD" }, () => "This is an options or head request ");
Lambda expressions, not objects, are our "atoms" that we build molecules with in this world. They are the building blocks.
app.MapGet("/", () => "This is an inline lambda");
var handler = () => "This is a lambda variable";
app.MapGet("/", handler);
But it's just a function, so you can organize things however you want!
var handler = new HelloHandler();
app.MapGet("/", handler.Hello);
class HelloHandler
{
public string Hello()
{
return "Hello World";
}
}
You can capture route parameters as part of the route pattern definition.
app.MapGet("/users/{userId}/books/{bookId}", (int userId, int bookId) => $"The user id is {userId} and book id is {bookId}");
Route constraints influence the matching behavior of a route, and the more specific constraint wins.
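A rough sketch of the idea with two hypothetical routes: a request for /todos/1 matches the int-constrained route, while /todos/groceries falls through to the unconstrained one.
app.MapGet("/todos/{id:int}", (int id) => $"Got the todo with id {id}");
app.MapGet("/todos/{text}", (string text) => $"Searching todos for '{text}'");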
The doc also walks through accepting a file upload by reading the form off the HttpRequest. Here's that handler, wrapped in a MapPost so the snippet stands on its own (the "/upload" route name is an assumption here; uploadsPath is a directory path defined elsewhere in the sample):
app.MapPost("/upload", async (HttpRequest req) =>
{
    var form = await req.ReadFormAsync();
    var file = form.Files["file"];
    if (file is null)
    {
        return Results.BadRequest();
    }
    var uploads = Path.Combine(uploadsPath, file.FileName);
    await using var fileStream = File.OpenWrite(uploads);
    await using var uploadStream = file.OpenReadStream();
    await uploadStream.CopyToAsync(fileStream);
    return Results.NoContent();
});
Sponsor: YugabyteDB is a distributed SQL database designed for resilience and scale. It is 100% open source, PostgreSQL-compatible, enterprise-grade, and runs across all clouds. Sign up and get a free t-shirt.
But it's super fun and very easy! Once tests are easy to write, WRITE A LOT OF THEM.
Here's a simple Unit Test of a Web API:
[Fact]
public async Task GetTodos()
{
await using var application = new TodoApplication();
var client = application.CreateClient();
var todos = await client.GetFromJsonAsync<List<Todo>>("/todos");
Assert.Empty(todos);
}
Look how nice that is. Client and Server (Application) are right there, and the HTTP GET is just a function call (as this is a Unit Test, not an integration test that covers end-to-end full stack).
Here's the TodoApplication application factory that creates a Host with a mocked out in memory version of a SQLite database.
class TodoApplication : WebApplicationFactory<Todo>
{
protected override IHost CreateHost(IHostBuilder builder)
{
var root = new InMemoryDatabaseRoot();
builder.ConfigureServices(services =>
{
services.AddScoped(sp =>
{
// Replace SQLite with the in memory provider for tests
return new DbContextOptionsBuilder<TodoDbContext>()
.UseInMemoryDatabase("Tests", root)
.UseApplicationServiceProvider(sp)
.Options;
});
});
return base.CreateHost(builder);
}
}
Nice and clean. You're talking directly to the API, testing just the Unit of Work. No need for HTTP, you're just calling a clean method on the existing API, directly.
That's a simple example, just getting Todos. How would we test making one (POSTing to our Todo application as a Minimal .NET 6 API?)
[Fact]
public async Task PostTodos()
{
await using var application = new TodoApplication();
var client = application.CreateClient();
var response = await client.PostAsJsonAsync("/todos", new Todo { Title = "I want to do this thing tomorrow" });
response.EnsureSuccessStatusCode(); // confirm the POST came back with a success status code
var todos = await client.GetFromJsonAsync<List<Todo>>("/todos");
Assert.Single(todos);
Assert.Equal("I want to do this thing tomorrow", todos[0].Title);
Assert.False(todos[0].IsComplete);
}
You could abstract the setup away if you wanted to and start with a Server/App and Client ready to go, but it's just two lines.
Here we are asserting that it returned an HTTP 200 - even though the HTTP networking stack isn't involved we are still able to test intent. Then we confirm that we created a Todo and could successfully retrieve it from the (in-memory) database.
Pretty slick!
Sponsor: YugabyteDB is a distributed SQL database designed for resilience and scale. It is 100% open source, PostgreSQL-compatible, enterprise-grade, and runs across all clouds. Sign up and get a free t-shirt.