
How to detect if the User's OS prefers dark mode and change your site with CSS and JS


Dark Mode Switch

I got a tweet from Stevö John, who said he found my existing light theme for my blog jarring, as he lives in Dark Mode. I had never really thought about it before, but once he said it, it was obvious. Not only should I support dark mode, but I should detect the user's preference and switch seamlessly, and keep switching if the browser or OS preference changes mid-session. Stevö was kind enough to send some sample CSS and a few links, so I started to explore the topic.

There are a few things to consider here when using prefers-color-scheme and detecting dark mode:

  • Using the existing theme as much as possible.
    • I don't want to have a style.css and a style-dark.css if I can avoid it. Otherwise it'd be a maintenance nightmare.
  • Make it work on all my sites
  • Consider 3rd party widgets
    • I use a syntax highlighter (very very old) for my blog, and I use a podcast HTML5 player from Simplecast for my podcast. I'd hate to dark mode it all and then have a big old LIGHT MODE podcast player scaring people away. As such, I need the context to flow all the way through.
  • Consider the initial state of the page as well as the state changing.
    • Sure, I could just make the page look good when you load it, but if you change modes (dark to light and back) in the middle of viewing my page, it should change too, right? And it needs to satisfy all the requirements above.

You can set your Chrome/Edge browser to use System Settings, Light, or Dark. Search for Theme in Settings.

Choose your Theme, Dark or Light

All this, and I can only do it on my lunch hour because this blog isn't my actual day job. Let's go!

The prefers-color-scheme CSS Media Query

I love CSS @media queries and have used them for many years to support mobile and tablet devices. Today they are a staple of responsive design. Turns out you can just use a @media query to see if the user prefers dark mode.

@media (prefers-color-scheme: dark) {

Sweet. Anything inside here (the C in CSS stands for Cascading, remember) will override what comes before. Here's a few starter rules I changed. I was just changing stuff in the F12 tools inspector, and then collecting them back into my main CSS page. You can also use variables if you are an organized CSS person with a design system.

These are just a few, but you get the idea. Note the .line-tan example also, where I say "just put it back to its initial value." That's often a lot easier than coming up with "the opposite" value, which in this case would have meant generating some PNGs.

@media (prefers-color-scheme: dark) {

    body {
        color: #b0b0b0;
        background-color: #101010;
    }

    .containerOuter {
        background-color: #000;
        color: #b0b0b0;
    }

    .blogBodyContainer {
        background-color: #101010;
    }

    .line-tan {
        background: initial;
    }

    #mainContent {
        background-color: #000;
    }

    ...snip...
}

Sweet. This change to my main css works for the http://hanselman.com main site. Let's do the blog now, which includes the 3rd party syntax highlighter. I use the same basic rules from my main site but then also had to (sorry CSS folks) be aggressive and overly !important with this very old syntax highlighter, like this:

@media (prefers-color-scheme: dark) {

    .syntaxhighlighter {
        background-color: #000 !important;
    }

    .syntaxhighlighter .line.alt1 {
        background-color: #000 !important;
    }

    .syntaxhighlighter .line.alt2 {
        background-color: #000 !important;
    }

    .syntaxhighlighter .line {
        background-color: #000 !important;
    }

    ...snip...
}

Your mileage may vary but it all depends on the tools. I wasn't able to get this working without the !important which I'm told is frowned upon. My apologies.

Detecting Dark Mode preferences with JavaScript

The third-party control I use for my podcast is, like a lot of controls, an iframe. As such, it takes its parameters as URL querystring parameters.

I generate the iFrame like this:

<iframe id='simpleCastPlayeriFrame'
        title='Hanselminutes Podcast Player'
        frameborder='0' height='200px' scrolling='no'
        seamless src='https://player.simplecast.com/{sharingId}'
        width='100%'></iframe>

If I add "dark=true" to the querystring, I'll get a different player skin. This is just one example, but it's common that 3rd party integrations will want either a querystring parameter, a variable, or custom CSS. You'll want to work with your vendors to make sure they not only care about dark mode (thanks, Simplecast!) but also have a way to easily enable it like this.

Dark Mode and Light Mode Simplecast Podcast player

But this introduces some interesting issues. I need to detect the preference with JavaScript and make sure the right player gets loaded.

I'd also like to notice if the theme changes (light to dark or back) and dynamically change my CSS (that part happens automatically by the browser) and this player (that's gotta be done manually, because dark mode was invoked via a URL querystring segment.)

Here's my code. Again, not a JavaScript expert but this felt natural to me. If it's not super idiomatic or it just sucks, email me and I'll do an update. I do check for window.matchMedia to at least not freak out if an older browser shows up.

if (window.matchMedia) {

    var match = window.matchMedia('(prefers-color-scheme: dark)')
    toggleDarkMode(match.matches);

    match.addEventListener('change', e => {
        toggleDarkMode(match.matches);
    })

    function toggleDarkMode(state) {
        let simpleCastPlayer = new URL(document.querySelector("#simpleCastPlayeriFrame").src);
        simpleCastPlayer.searchParams.set("dark", state);
        document.querySelector("#simpleCastPlayeriFrame").src = simpleCastPlayer.href;
    }
}

toggleDarkMode is a method so I can use it for the initial state and the 'change' state. It uses the URL object because parsing strings is so 2000-and-late. I set the searchParams rather than .append because I know it's always set. I set it.

As I write this I suppose I could have stored the document.querySelector() result like I did the matchMedia result, but I just saw it now. Darn. Still, it works! So I #shipit.

I am sure I missed a page or two or an element or three, so if you find a white page or a mistake, file it here https://github.com/shanselman/hanselman.com-bugs/issues and I'll take a look when I can.

All in all, a fun lunch hour. Thanks Stevö for the nudge!

Now YOU, Dear Reader can go update YOUR sites for both Light Mode and Dark Mode.


Sponsor:  The No. 1 reason developers choose Couchbase? You can use your existing SQL++ skills to easily query and access JSON. That’s more power and flexibility with less training. Learn more.



© 2021 Scott Hanselman. All rights reserved.
    

Implicit Usings in .NET 6


Magic

".NET 6 introduces implicit namespace support for C# projects. To reduce the amount of using directives boilerplate in .NET C# project templates, namespaces are implicitly included by utilizing the global using feature introduced in C# 10."

NOTE: Did you know that Visual Basic has had this very feature forever?

Remember that C# as a language is itself versioned and in .NET 6 we'll have support for C# 10 features like global usings, which are super cool.

Since we don't want to break existing stuff, there are some things to consider. First, for new projects this is on by default, but for existing projects it will be off by default. This offers the best of both worlds.

When you create a new .NET 6 project it will enable this new property:

<ImplicitUsings>enable</ImplicitUsings>

Read more about this breaking change here. This build property builds upon (utilizes) the C# global using feature, which means any .cs file in your project can have a line like:

global using global::SomeNamespace;

The SDK uses a target to autogenerate a .cs file called ImplicitNamespaceImports.cs that will be in your obj folder, but you can - if you desire - have full control and add or remove namespaces to taste.
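
For example, you could add a file like this to the project yourself (the file name and the second namespace are mine, purely for illustration) and those namespaces would be in scope everywhere:

// GlobalUsings.cs - any .cs file in the project works; this file name is just a convention
global using System.Text.Json;     // a real BCL namespace
global using MyCompany.Widgets;    // hypothetical namespace, just to show the shape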

This gives advanced users who understand target files a huge amount of control while still allowing newbies to reap the benefits. Another way to think about it is: if you care, you can control it all. If you don't, it'll just make things easier and cleaner.

Let's look at some code to point out that it's pretty cool. Oleg gives a great example doing some basic threading where there are three lines of code (cool) and three more lines of usings to bring in the namespace support for the actual work (less cool).

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

Console.WriteLine("Hello World");
await Task.Delay(1000);
List<int> _ = new ();

With implicit usings (implicitly bringing in default namespaces), .NET apps with C# 10 can do more out of the box. It's faster to get started because 90% of the stuff you do all the time is already available and ready to be used!
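
For contrast, here's roughly what that same sample looks like once <ImplicitUsings>enable</ImplicitUsings> is set - System, System.Collections.Generic, and System.Threading.Tasks are already in scope, so the using lines simply disappear:

// No using directives needed; the implicit global usings cover these namespaces
Console.WriteLine("Hello World");
await Task.Delay(1000);
List<int> _ = new ();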

Maybe this example is too simple? What if you were building a simple Worker or Web app? Check out Wade's example.

System
System.Collections.Generic
System.IO
System.Linq
System.Net.Http
System.Threading
System.Threading.Tasks
System.Net.Http.Json
Microsoft.AspNetCore.Builder
Microsoft.AspNetCore.Hosting
Microsoft.AspNetCore.Http
Microsoft.AspNetCore.Routing
Microsoft.Extensions.Configuration
Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.Hosting
Microsoft.Extensions.Logging

This is a lot of boilerplate if you just want a web app. If I'm using the Microsoft.NET.Sdk.Worker SDK in my project file, or just Microsoft.NET.Sdk.Web, I don't have to think about or include any of these - they are there implicitly!

You may initially love implicit usings, as I do, or you may find it to be too "magical." I would remind you that most innovations feel magical, especially if they aren't in your face. The Garbage Collector is taken for granted by the majority of .NET developers, but I found it magical, having spent the previous 10 years managing my own memory down to the byte.

Hope you enjoy this new feature as we get closer to .NET 6's release.


Sponsor:  The No. 1 reason developers choose Couchbase? You can use your existing SQL++ skills to easily query and access JSON. That’s more power and flexibility with less training. Learn more.


© 2021 Scott Hanselman. All rights reserved.
    

The code worked differently when the moon was full


Full Moon

I love a good bug, especially ones that are initially hard to explain but then turn into forehead slapping moments - of course!

There's a bug over on GitHub called Hysteresis effect on threadpool hill-climbing that is a super interesting read.

Hill climbing is an algorithmic technique where you have a problem (a hill) and you improve and improve (climb) until you reach some acceptable maximum (the top).

Sebastian, the OP of the bug, says there's a "Hysteresis effect" on the threadpool. "Hysteresis is the dependence of the state of a system on its history." Something weird is happening now because something happened before...but what was it?

Sawtooth up and down graphs aren't THAT interesting...but look at the x-axis. This isn't showing minute by minute or even millisecond by millisecond ups and downs like you may have seen before. This x-axis uses months as its unit of measure. Read that again and drink it in.

A sawtooth graph going up and down month to month

Things are cool in February until they are bad for weeks and then cool in March and it moves along a cycle. Not strictly the cycle of the moon but close.

He's seeing the number of cores being used change from month to month when using PortableThreadPool:

We have noticed a periodic pattern on the threadpool hill-climbing logic, which uses either n-cores or n-cores + 20 with an hysteresis effect that switches every 3-4 weeks.

Did you know (I know because I'm old) that Windows 95, for a time, was unable to run longer than 49.7 days? If you ran it that long it would eventually crash! That's because there are 86.4 million ms in a day, i.e. 1000 * 60 * 60 * 24 = 86,400,000, and 32 bits can hold 4,294,967,296 values, so 4,294,967,296 / 86,400,000 = 49.7102696 days!
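
If you want to check that math yourself, here's the same arithmetic as a tiny C# sketch:

const double msPerDay = 1000d * 60 * 60 * 24;   // 86,400,000 ms in a day
const double tickRange = 4294967296d;            // 2^32 possible 32-bit tick values
Console.WriteLine(tickRange / msPerDay);         // prints ~49.71 days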

Kevin, in the GitHub issue, notes this as well:

The whole period of square wave sounds an awful lot like it is around 49.7 days. That is how long it takes GetTickCount() to wrap around. On POSIX platforms the platform abstraction layer implements this, and the value returned for that is based not on uptime but on wall clock time, which matches with all machines changing on the same day.

This 49.7 days number is well known, as it's how long it takes before GetTickCount() overflows/wraps. Kevin goes on to actually give the changeover dates, which correspond to the graph!

  • Thu Jan 14 2021
  • Sun Feb 07 2021
  • Thu Mar 04 2021
  • Mon Mar 29 2021
  • Fri Apr 23 2021

He then finds the code in PortableThreadPool.cs that explains the issue:

private bool ShouldAdjustMaxWorkersActive(int currentTimeMs)
{
    // We need to subtract by prior time because Environment.TickCount can wrap around,
    // making a comparison of absolute times unreliable.
    int priorTime = Volatile.Read(ref _separated.priorCompletedWorkRequestsTime);
    int requiredInterval = _separated.nextCompletedWorkRequestsTime - priorTime;
    int elapsedInterval = currentTimeMs - priorTime;
    if (elapsedInterval >= requiredInterval)
    {
        ...

He says, and this is all Kevin:

currentTimeMs is Environment.TickCount, which in this case happens to be negative.

The if clause controls if the hill climbing is even run.

_separated.priorCompletedWorkRequestsTime and _separated.nextCompletedWorkRequestsTime start out as zero on process start, and only get updated if the hill climbing code is run.

Therefore, requiredInterval = 0 - 0 and elapsedInterval = negativeNumber - 0. This causes the if statement to become
if (negativeNumber - 0 >= 0 - 0) which returns false, so the hill climbing code is not run, and therefore the variables never get updated, and remain zero. The native version of the thread pool code does all math with unsigned numbers which would avoid such a bug, and its equivalent part is not even quite the same math in the first place.

The easy fix here is probably to use unsigned arithmetic, but alternatively having the two fields get initialized to Environment.TickCount would probably also work.

Back to me. Fabulous. The fix is to cast the results to unsigned ints via (uint).

Before:

int requiredInterval = _separated.nextCompletedWorkRequestsTime - priorTime;
int elapsedInterval = currentTimeMs - priorTime;

After:

uint requiredInterval = (uint)(_separated.nextCompletedWorkRequestsTime - priorTime);
uint elapsedInterval = (uint)(currentTimeMs - priorTime);

What an interesting and insidious bug! Bugs based on time calculations can often show themselves later, when viewed through a longer lens and scope of time...sometimes WAY longer than you'd expect.
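
To see why the (uint) cast helps, here's a tiny sketch (my own example, not from the fix itself) of an elapsed-time calculation that stays correct even when the tick counter has wrapped from positive to negative:

int start = int.MaxValue - 5;        // sampled just before the 32-bit counter wraps
int now = int.MinValue + 10;         // sampled just after the wrap
uint elapsed = (uint)(now - start);  // 16, the true elapsed milliseconds
Console.WriteLine(elapsed);          // prints 16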


Sponsor: YugabyteDB is a distributed SQL database designed for resilience and scale. It is 100% open source, PostgreSQL-compatible, enterprise-grade, and runs across all clouds. Sign up and get a free t-shirt.



© 2021 Scott Hanselman. All rights reserved.
    

How to make Shared Google Calendars show up on your iPhone and iPad Calendar


My niece just started her MBA at a local university and that uni is a G Suite/Google/Gmail user. Her professors share their class calendars (vs inviting the students to events) so everything is a "Shared with You" shared calendar. That means the events aren't on your primary Google Calendar, they are read-only shares to you.

My niece uses an iPhone and wanted the calendars to sync with the iPhone calendar she already uses. Google help and everyone else says "install Google Calendar." Sure, that works and she can see the calendars in that other app, but again, it's totally not integrated with her life and the existing Calendar app on iPhone, Mac, and iPad.

Turns out there is a 12 year old page deep in Google Calendar at https://www.google.com/calendar/syncselect that you can visit to "reshare" those shared calendars to external users like iOS. Love that Copyright 2009 action and the ongoing dedication to improvement in Google Suite and Calendar for the real features that people need /s.

Google Calendar Sync Select

Make sure you are signed into the right Google Account before you click that link. At this point, return to your iPhone/iPad Calendar app and tap Calendars at the bottom. Check the ones you want to see, and press done. Wait a few minutes and your Google Shared Calendars will start to sync to your iOS Calendar!

Hope this helps.


Sponsor: YugabyteDB is a distributed SQL database designed for resilience and scale. It is 100% open source, PostgreSQL-compatible, enterprise-grade, and runs across all clouds. Sign up and get a free t-shirt.



© 2021 Scott Hanselman. All rights reserved.
    

Differences between Hashtable vs Dictionary vs ConcurrentDictionary vs ImmutableDictionary


Dictionaries

I'm very much enjoying David Fowler's tweets, and since he doesn't have a blog, I will continue to share and expand on his wisdom so that it might reach a larger audience.

He had a conversation with Stephen Toub where Stephen points out that ".NET has 4 built-in dictionary/map types [and] there’s no guidance on when to use what, mostly individual documentation on each implementation."

  • Hashtable
  • Dictionary
  • ConcurrentDictionary
  • ImmutableDictionary

There is actually some good documentation on C# Collections and Data Structures here that we can compare and combine with Stephen Toub's good advice (via David) as well!

There are two main types of collections; generic collections and non-generic collections. Generic collections are type-safe at compile time. Because of this, generic collections typically offer better performance.

Definitely important to remember. Generics have been around since .NET Framework 2.0, about 15 years ago, so this is a good reason to consider avoiding Hashtable and using Dictionary<> instead. Hashtable is weakly typed, and while it allows you to have keys that map to different kinds of objects (which may seem attractive at first), you'll need to "box" the objects up, and boxing and unboxing is expensive. You'll almost always want to use Dictionary instead.
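
Here's a minimal sketch (my example, not from the docs) of the boxing difference:

using System.Collections;
using System.Collections.Generic;

var table = new Hashtable();
table["answer"] = 42;                  // the int is boxed on the way in
int fromTable = (int)table["answer"];  // cast and unbox on the way out; a wrong cast fails at runtime

var dict = new Dictionary<string, int>();
dict["answer"] = 42;                   // no boxing
int fromDict = dict["answer"];         // checked at compile time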

If you're accessing your collection across threads, consider the System.Collections.Concurrent namespace or using System.Collections.Immutable which is thread-safe because you'll always be working on a copy as the original collection is immutable (not modifiable).

David says this about ConcurrentDictionary:

  • ConcurrentDictionary - "Good read speed even in the face of concurrency, but it’s a heavyweight object to create and slower to update."

Or perhaps

  • Dictionary with lock - "Poor read speed, lightweight to create and medium update speed."
  • Dictionary as immutable object - "best read speed and lightweight to create but heavy update. Copy and modify on mutation e.g. new Dictionary(old).Add(key, value)" (see the sketch after this list)
  • Hashtable - "Good read speed (no lock required), same-ish weight as dictionary but more expensive to mutate and no generics!"
  • ImmutableDictionary - "Poorish read speed, no locking required but more allocations require to update than a dictionary."
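
Here's a minimal sketch of that "Dictionary as immutable object" copy-and-modify-on-mutation idea (the class and member names are mine): readers use whatever dictionary reference is current, and writers copy, modify, and swap.

using System.Collections.Generic;
using System.Threading;

class CopyOnWriteMap
{
    private Dictionary<string, int> _map = new();

    // Reads take no lock; they just see a consistent snapshot of the map.
    public bool TryGet(string key, out int value) => _map.TryGetValue(key, out value);

    public void Set(string key, int value)
    {
        while (true)
        {
            var current = _map;
            var copy = new Dictionary<string, int>(current) { [key] = value }; // copy and modify
            if (Interlocked.CompareExchange(ref _map, copy, current) == current)
                return; // swap succeeded; otherwise another writer got in first, so retry
        }
    }
}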

Another one that isn't often used but I'll add as it's good to know about is

  • KeyedCollection - Generic and ordered. Uses Dictionary and List underneath

This is great advice from David:

Use the most obvious one until it bites you. Most software engineering is like this.

Measure and test, measure and test. Good luck to you!


Sponsor: Make login Auth0’s problem. Not yours. Provide the convenient login features your customers want, like social login, multi-factor authentication, single sign-on, passwordless, and more. Get started for free.



© 2021 Scott Hanselman. All rights reserved.
    

ASP.NET Core Diagnostic Scenarios


Threading

David and friends have a great repository filled with examples of "broken patterns" in ASP.NET Core applications. It's a fantastic learning resource with both markdown and code that covers a number of common areas when writing scalable services in ASP.NET Core. Some of the guidance is general purpose but is explained through the lens of writing web services.

Here are a few great DON'T and DO examples, but be sure to Star the repo and check it out for yourself! This is somewhat advanced stuff, but if you are doing high-output, low-latency web services AT SCALE these tips will make a huge difference when you're doing something a hundred thousand times a second!

DON'T - This example uses the legacy WebClient to make a synchronous HTTP request.

public string DoSomethingAsync()
{
    var client = new WebClient();
    return client.DownloadString("http://www.google.com");
}

DO - This example uses an HttpClient to asynchronously make an HTTP request.

static readonly HttpClient client = new HttpClient();

public Task<string> DoSomethingAsync()
{
    return client.GetStringAsync("http://www.google.com");
}

Here's a list of ASP.NET Core Guidance. This one is fascinating. ASP.NET Core doesn't buffer responses which allows it to be VERY scalable. Massively so. As such you do need to be aware that things need to happen in a certain order - Headers come before Body, etc so you want to avoid adding headers after the HttpResponse has started.

DON'T - Add headers once you've started sending the body.

app.Use(async (context, next) =>
{
    await context.Response.WriteAsync("Hello ");

    await next();

    // This may fail if next() already wrote to the response
    context.Response.Headers["test"] = "value";
});

DO - Either check if it's started before you send the headers:

app.Use(async (context, next) =>
{
    await context.Response.WriteAsync("Hello ");

    await next();

    // Check if the response has already started before adding header and writing
    if (!context.Response.HasStarted)
    {
        context.Response.Headers["test"] = "value";
    }
});

Or even BETTER, add the headers in the OnStarting callback to guarantee they get set.

app.Use(async (context, next) =>
{
    // Wire up the callback that will fire just before the response headers are sent to the client.
    context.Response.OnStarting(() =>
    {
        context.Response.Headers["test"] = "value";
        return Task.CompletedTask;
    });

    await next();
});

There's a ton of great guidance around async programming. If you are returning something small or trivial, like a simple value, DON'T spin up a Task<> just to return it:

public class MyLibrary
{
    public Task<int> AddAsync(int a, int b)
    {
        return Task.Run(() => a + b);
    }
}

DO use ValueTask<> instead; this example doesn't use an extra thread and avoids the heap allocation entirely:

public class MyLibrary
{
    public ValueTask<int> AddAsync(int a, int b)
    {
        return new ValueTask<int>(a + b);
    }
}
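
For completeness, a hypothetical caller (my example, not from the repo) - note that a ValueTask<int> should generally be awaited once, and only once:

var library = new MyLibrary();
int sum = await library.AddAsync(2, 3); // awaited exactly once; no Task allocated on this path
Console.WriteLine(sum);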

There's a ton of good learning over there so go check it out! https://github.com/davidfowl/AspNetCoreDiagnosticScenarios


Sponsor: Make login Auth0’s problem. Not yours. Provide the convenient login features your customers want, like social login, multi-factor authentication, single sign-on, passwordless, and more. Get started for free.



© 2021 Scott Hanselman. All rights reserved.
    

Shrink your WSL2 Virtual Disks and Docker Images and Reclaim Disk Space


Docker Desktop for Windows uses WSL2 to manage all your images and container files and keeps them in a private virtual hard drive (VHDX) called ext4.vhdx.

It's usually in C:\Users\YOURNAME\AppData\Local\Docker\wsl\data and you can often reclaim some of the space if you've cleaned up (pruned your images, etc) with Optimize-Vhd under an administrator PowerShell shell/prompt.

You'll need to stop Docker Desktop by right clicking on its tray icon and choosing Quit Docker Desktop. Once it's stopped, you'll want to stop all running WSL2 instances with wsl --shutdown

Mine was 47 gigs, as I use Docker A LOT, so when I optimize it from an admin PowerShell prompt in the wsl\data folder

optimize-vhd -Path .\ext4.vhdx -Mode full

...it is now 2 gigs smaller. That's nice, but it's not a massive improvement. I can run docker images and see that many are out of date or old. If I'm not using Kubernetes I can turn it off and delete those containers as well from the Docker settings UI.

I'll run docker system prune -a to AGGRESSIVELY tidy up. Read about these commands before you try them yourself. -a means all unused images, not just dangling ones. Don't delete anything you love or care about. If you're worried, docker system prune is safer without the -a.

Now my Docker WSL 2 VHD is 15 gigs smaller! Learn more about WSL, Windows 11, and WSLg on my latest YouTube!

NOTE: You can now get WSL from the Windows Store! Go get it here and then run "wsl --install" from your command line.

If you want, you can also go find your Ubuntu and other WSL disks and Compact them as well. I only think about this once or twice a year, so don't consider this a major cleanup thing unless you're really tight on space.

Ubuntu WSL disks will be in folders with names like

C:\Users\scott\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState

or

C:\Users\scott\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu20.04onWindows_79rhkp1fndgsc\LocalState

But you will want to look around for yours. Again, back things up and make sure WSL is shutdown first!

Enjoy! REMEMBER - Be sure to back things up before you run commands as admin from some random person's blog. Have a plan.


Sponsor: YugabyteDB is a distributed SQL database designed for resilience and scale. It is 100% open source, PostgreSQL-compatible, enterprise-grade, and runs across all clouds. Sign up and get a free t-shirt!



© 2021 Scott Hanselman. All rights reserved.
    

Dotnet could not execute because the application was not found or a compatible .NET SDK is not installed


I ran into this interesting issue where my System PATH environment variables got out of order. I ran "dotnet --version" and saw an error I'd not seen before. Dotnet "Could not execute because the application was not found or a compatible .NET SDK is not installed." What's that?

How did I diagnose this?

dotnet can't run because the application was not found

From the command prompt, I typed "where dotnet" to ask cmd.exe "which dotnet.exe are you offering me, and in what order?" You would use the which command in Linux, the where command in DOS/cmd, and the more explicit where.exe dotnet in PowerShell.

Here I can see that Program Files (x86) has a dotnet.exe that is FIRST in the Path before the x64 version I expected.

Digging into GitHub I can see that the bug has been fixed but it's good to know how things get into a weird state and how easy it is to fix. In this case, I just swapped (or removed) the x86 and x64 (native architecture) paths in my System PATH via the environment variables UI in Windows. Just type Start and Environment Variables, click it, and then Double Click (or Edit) the PATH variable for a nice UI experience. You can even "move up and move down" within that UI - far nicer than editing a text file.

This can happen when you install a 32-bit .NET SDK on a 64-bit system and the last one in wins. In my case, I don't need a 32-bit host at all in my PATH so I ended up just removing it from my PATH completely.

Hope this helps you if you hit it and glad it's fixed.


Sponsor: YugabyteDB is a distributed SQL database designed for resilience and scale. It is 100% open source, PostgreSQL-compatible, enterprise-grade, and runs across all clouds. Sign up and get a free t-shirt!



© 2021 Scott Hanselman. All rights reserved.
    

Space Cadet Pinball for Windows 95 recompiled for Linux running on Windows 11 as a Linux app under WSLg


Award for longest blog post title ever? Andrey Muzychenko has a great GitHub repository where they decompiled the 25-year-old Space Cadet Pinball application from Windows 95/XP and then recompiled it for Linux (and really any platform, now that it's portable code!).

NOTE: Because this is a decompilation/recompilation, it doesn't include the original data files. You'll need those from a Windows XP disk or ISO that you'll need to find yourself.

I recently did a YouTube where I showed that Windows 11 runs Graphical Linux Apps out of the box with WSLg.

Here, they've taken a Windows 95 32-bit app and decompiled it from the original EXE, done some nice cleanup, and now it can be recompiled to other targets like Linux.

So, could I go Windows 95 -> Linux -> Windows 11 -> WSL -> WSLg and run this new native Linux executable again on Windows?

If you don't think this is cool, that's a bummer. It's an example of how powerful (and fun) virtualization has become on modern systems!

Pinball under Linux under Windows 11

I just launched WSL (Ubuntu) and installed a few things to compile the code:

sudo apt-get install libsdl2-image-dev
sudo apt-get install libsdl2-mixer-dev
sudo apt install gcc clang build-essential cmake

Then I cloned the repo under WSL and built. It builds into bin and creates a Linux executable.

NOTE: Place compiled executable into a folder containing original game resources (not included).

I am a digital hoarder so I have digital copies of basically everything I've worked on for the last 30 years. I happened to have a Windows XP virtual disk drive from a VM from years ago that was saved on my Synology.

Virtual hard drive from Windows XP

I was able to open it and get all the original resources and wav files.

Pinball resources

Then I copied all the original resources (minus the .exe) and ran the newly built Linux version...and it magically pops up and runs on Windows...as a graphical Linux app.

3D Pinball for Windows

Amazing! Have fun!


Sponsor: Make login Auth0’s problem. Not yours. Provide the convenient login features your customers want, like social login, multi-factor authentication, single sign-on, passwordless, and more. Get started for free.



© 2021 Scott Hanselman. All rights reserved.
    

Parallel.ForEachAsync in .NET 6


Great tweet from Oleg Kyrylchuk (follow him!) showing how cool Parallel.ForEachAsync is in .NET 6. It's new! Let's look at this clean bit of code in .NET 6 that calls the public GitHub API and retrieves n number of names and bios, given a list of GitHub users:

using System.Net.Http.Headers;
using System.Net.Http.Json;

var userHandlers = new []
{
    "users/okyrylchuk",
    "users/shanselman",
    "users/jaredpar",
    "users/davidfowl"
};

using HttpClient client = new()
{
    BaseAddress = new Uri("https://api.github.com"),
};
client.DefaultRequestHeaders.UserAgent.Add(new ProductInfoHeaderValue("DotNet", "6"));

ParallelOptions parallelOptions = new()
{
    MaxDegreeOfParallelism = 3
};

await Parallel.ForEachAsync(userHandlers, parallelOptions, async (uri, token) =>
{
    var user = await client.GetFromJsonAsync<GitHubUser>(uri, token);

    Console.WriteLine($"Name: {user.Name}\nBio: {user.Bio}\n");
});

public class GitHubUser
{
    public string Name { get; set; }
    public string Bio { get; set; }
}

Let's note a few things in this sample Oleg shared. First, there's no Main() as that's not required (but you can have it if you want).

We also see just two usings, bringing other namespaces into scope. Here's what it would look like with explicit namespaces:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

We've got an array of users to look up in userHandlers. We prep an HttpClient and setup some ParallelOptions, giving our future ForEach the OK to "fan out" to up to three degrees of parallelism - that's the max number of concurrent tasks we will enable in one call. If it's -1 there is no limit to the number of concurrently running operations.

The really good stuff is here. Tight and clean:

await Parallel.ForEachAsync(userHandlers, parallelOptions, async (uri, token) =>
{
    var user = await client.GetFromJsonAsync<GitHubUser>(uri, token);

    Console.WriteLine($"Name: {user.Name}\nBio: {user.Bio}");
});

"Take this array and naively fan out into parallel tasks and make a bunch of HTTP calls. You'll be getting JSON back that is shaped like the GitHubUser."

We could make it even syntactically shorter if we used a record vs a class with this syntax:

public record GitHubUser (string Name, string Bio);

This makes "naïve" parallelism really easy. By naïve we mean "without inter-dependencies." If you want to do something and you need to "fan out" this is super easy and clean.
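
Building on Oleg's sample above (reusing its userHandlers, client, and GitHubUser), here's a sketch of my own showing how a CancellationTokenSource can be wired into ParallelOptions so the whole fan-out can be cancelled; the token passed to the lambda flows from those options:

using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10)); // give up after 10 seconds

ParallelOptions options = new()
{
    MaxDegreeOfParallelism = 3,
    CancellationToken = cts.Token
};

// Throws OperationCanceledException if the token fires before the loop completes
await Parallel.ForEachAsync(userHandlers, options, async (uri, token) =>
{
    var user = await client.GetFromJsonAsync<GitHubUser>(uri, token);
    Console.WriteLine($"Name: {user.Name}\nBio: {user.Bio}");
});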


Sponsor: Make login Auth0’s problem. Not yours. Provide the convenient login features your customers want, like social login, multi-factor authentication, single sign-on, passwordless, and more. Get started for free.



© 2021 Scott Hanselman. All rights reserved.
    

How to fix Base System Device Driver issue in Windows 10 and Windows 11


I had a hard drive die recently so I decided to do a full fresh reinstall of Windows, this time a fresh Windows 11 from a downloaded ISO burned to a USB stick.

Lots of Base System Devices in Device Manager

It was a solid install and everything worked out of the box. I used it for a few days and had no issues, but while poking around I noticed in the Device Manager that there were dozens of Base System Devices that were banged out. Like a TON.

I'd like to get that fixed, so I went to Windows Update but WU said everything was cool.

Since a lot of stuff moved and was redesigned in Windows 11 I went looking for "Windows Update Optional Updates" and it took me a while to find it, even though it's listed right there on Windows Update in Settings.

Windows Update optional updates in Windows 11

Click on Advanced Options. Here you can control things like your Windows Update active hours so it doesn't reboot when you don't want it, etc.

Here you'll see a bunch of Optional Updates. I had like 33 of them.

Lots of Optional Updates in Windows 11

Here's what it looks like when you have a bunch of updates pending. These are Chipset and Motherboard updates.

Optional Updates in Windows Update

I did have to select each of these checkboxes and select Install at the end, but once they were done, I had no banged out (yellow exclamation point) devices in Device Manager.

Hope this helps!


Sponsor: Lob APIs ensure your addresses are deliverable and everything you send arrives at the right place. Add address autocompletion and verification in minutes using React, Vue or Javascript - Try for free!



© 2021 Scott Hanselman. All rights reserved.
    

How to set the default user for a WSL distro that has been manually installed with wsl --import


I've blogged before on how to easily move WSL distributions between Windows 10 machines with import and export. I recently did a full fresh install of Windows 11 and wanted to bring my existing highly customized Ubuntu installation along with me.

You can tar up (zip up) the user-mode parts of your WSL2 distributions like this:

wsl --export Ubuntu-20.04 c:\Temp\UbuntuBackup.tar

The part after --export is the distribution name that you can see from running wsl --list -v. The last argument is a full path and filename for the archive you want created.

Next, on the machine you've moved to, you'll do the reverse. Notice that I've changed the Distro name here, and you can if you want. Remember also that you can have as many Linux Distros installed as you want.

wsl --import Ubuntu c:\Linux c:\Temp\UbuntuBackup.tar

The Linux file system is stored in a VHDX (virtual hard drive), usually deep in AppData/Local/YadaYada, but this import is an opportunity for me to store it in C:\Linux which will also make it easier to do maintenance on like Compact-VHD which shrinks your WSL2 disks.

Here's the weird part. When you import a WSL2 distro manually, running that distro on the new machine will end up logging you in as root. It's forgotten that I'm "scott."

There's a lot of ways to fix this that involve the registry or passing in arguments to wsl, but I just want it to work when I run "wsl" or "wsl -d distroname."

Run your distro, and then edit /etc/wsl.conf and add a [user] section like this:

[user]

default=scott

This is the ideal way to set your WSL distro's default user for imported tars because it's stored inside the Linux file system and the setting will stick around when you export/import later on.

Linux on Windows

Hope this helps!


Sponsor: Lob APIs ensure your addresses are deliverable and everything you send arrives at the right place. Add address autocompletion and verification in minutes using React, Vue or Javascript - Try for free!



© 2021 Scott Hanselman. All rights reserved.
    

WSL2 can now mount Linux ext4 disks directly


If you're on a version of Windows 11 that is build 22000 or greater, you can now use WSL to mount Linux disks directly. Run winver to see your Windows version. I'm on 22000.282 as of the time of this writing.

I can also run wsl --help and see the --mount instructions. If you don't have them, you're not on the latest, or you can try installing/update WSL from the Windows Store. Installing WSL from the Windows Store gets you updates faster.

--mount <Disk>

Attaches and mounts a physical disk in all WSL2 distributions.
Options:
--bare
Attach the disk to WSL2, but don't mount it.

--type <Type>
Filesystem to use when mounting a disk, if not specified defaults to ext4.

--options <Options>
Additional mount options.

--partition <Index>
Index of the partition to mount, if not specified defaults to the whole disk.

--unmount [Disk]
Unmounts and detaches a disk from all WSL2 distributions.
Unmounts and detaches all disks if called without argument.

You'll need to be an admin to mount a disk. You can first get a list of all the disks using this PowerShell query:

Get-CimInstance -query "SELECT * from Win32_DiskDrive"

The DeviceID is a path like \\.\SOMETHING and that's what matters.

Then you just run wsl --mount \\.\SOMETHING.

The device will appear under /mnt/wsl/SOMETHING in your Linux instance. You can mount unpartitioned disks like this, or you can mount partitioned disks. Then you can run lsblk and see the partitions, and they'll be under /dev/<Device><Partition>. Once you know the partition number, you can go back and run wsl --mount \\.\SOMETHING --partition <Index> --type <Filesystem>. The filesystem parameter is for things like vfat, etc., for filesystems that have kernel support.

WSL --mount for ext4 and Linux File Systems on WSL2

Today SD Cards and Flash Drives aren't working, but USB externals work and internal drives work, as well as VHDs.


Sponsor: Couchbase Capella DBaaS is flexible, full-featured and fully managed  with built-in access via K/V, SQL and full text search. It’s blazing fast, yet surprisingly affordable. Try Capella today for free.



© 2021 Scott Hanselman. All rights reserved.
    

PowerShell 7.2.0 - Could not load type System.Management.Automation.Subsystem.PredictionResult


My PowerShell upgraded to the new PowerShell 7.2.0, and it happened automatically since I get PowerShell from the Windows Store. However, my fancy prompt that uses PSReadLine with Predictive Autocomplete suddenly stopped working.

I started getting this error on every prompt:

Could not load type 'System.Management.Automation.Subsystem.PredictionResult' from assembly 'Microsoft.PowerShell.PSReadLine.Polyfiller, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.

at Microsoft.PowerShell.PSConsoleReadLine.PredictionViewBase.Reset()
at Microsoft.PowerShell.PSConsoleReadLine.PredictionInlineView.Reset()
at Microsoft.PowerShell.PSConsoleReadLine.Prediction.Reset()
at Microsoft.PowerShell.PSConsoleReadLine.Initialize(Runspace runspace, EngineIntrinsics engineIntrinsics)
at Microsoft.PowerShell.PSConsoleReadLine.ReadLine(Runspace runspace, EngineIntrinsics engineIntrinsics, CancellationToken cancellationToken)

Well, you can see I'm using a Beta of PSReadLine 2.2:

### Environment

PSReadLine: 2.2.0-beta2
PowerShell: 7.2.0

But I had failed to keep it up to date, and when I got into this state, I realized that even though my prompt wasn't (momentarily) pretty, I could update it with one line while still staying on the beta train.

Install-Module PSReadLine -AllowPrerelease -Force

Now I'm on 2.2.0-beta4 and all is well and I have my cool prediction history back!


Sponsor: Couchbase Capella DBaaS is flexible, full-featured and fully managed  with built-in access via K/V, SQL and full text search. It’s blazing fast, yet surprisingly affordable. Try Capella today for free.



© 2021 Scott Hanselman. All rights reserved.
    

Let's upgrade my main site and podcast to .NET 6 LTS


.NET 6 is released and it's a LTS release which means it'll be fully and actively supported for the next 3 years. If you've been paused waiting for the right time to upgrade to .NET 6, it's a good time to make the move!

Right now, Hanselman.com and Hanselminutes.com (my podcast) are running on some version of .NET 5. You can see this by visiting them and scrolling to the very bottom of the footer, as I've added a git commit hash and Azure DevOps Build Number and Build ID to an ASP.NET website, and I'm using RuntimeInformation.FrameworkDescription to output the plain text version of .NET I'm using. This blog is on .NET Core 3.0, but I'll be working with Mark Downie this week to move it to .NET 6 LTS, as he's already got his instance of dasBlog running on 6!

© Copyright 2021, Scott Hanselman. Design by @jzy, Powered by .NET 5.0.10 and deployed from commit e5058e via build 20210920.3
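
That "Powered by" string comes from RuntimeInformation.FrameworkDescription; a minimal sketch (not the exact footer code from my site) looks like this:

using System.Runtime.InteropServices;

// Prints something like ".NET 5.0.10" - the same string rendered in the site footer
Console.WriteLine($"Powered by {RuntimeInformation.FrameworkDescription}");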

OK, let's see what's involved. Let's start with my podcast site. I've got the code on GitHub and running locally with "dotnet run" on the command line in both Linux and Windows. I can run the "dotnet upgrade assistant" which is great, but I also like to drive stick shift sometimes for smaller projects.

I'll update my TargetFramework in my csproj project file from net5.0 to net6.0 and update the major PackageReferences from 5.0.0 to 6.0.0. It compiles.

Optionally, I'll also run "dotnet outdated" which is one of my favorite tools. You'll want to make sure you have a solid test suite and not just do this without testing.

dotnet outdated tells me which packages need updating

I see that some of these are major changes, so I can do a diff of these packages with a number of tools, but my favorite is FuGet.org (Thanks, Frank!), so I can do a diff between the alpha version of Selenium I'm using and the released version, and see that the RemoteLogs type is now called Logs.

I will also update my Dockerfile and change versions like this

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build

and

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime

I'll confirm that these images build and test. I also run my tests optionally inside a container so that's nice.

Some of my sites use Azure DevOps and others use GitHub Actions. Both use YAML (yay) to manage their config, so I'll update my UseDotNet task in Azure DevOps YAML to version: "6.0.x"

I'll commit and start building in the cloud!

Changes to be committed:

(use "git restore --staged <file>..." to unstage)
modified: Dockerfile
modified: azure-pipelines.yml
modified: hanselminutes.core.tests/SeleniumTests.cs
modified: hanselminutes.core.tests/hanselminutes.core.tests.csproj
modified: hanselminutes.core/Startup.cs
modified: hanselminutes.core/hanselminutes-core.csproj

$ git commit -m "upgrade to .net 6"

I will also confirm that my Azure App Service is set to .NET 6, but this is only needed if I'm NOT running in a Docker Container or if I'm NOT using a self-contained executable.

Now I repeat this for my podcast and main site and I'm now on .NET 6! The blog (a larger upgrade) is next.


Sponsor: Lob’s developer-friendly APIs make it easy to send a letter, check, or postcard, as easily as email. Design dynamic HTML templates to personalize mail for maximum impact. Start Exploring with Postman!



© 2021 Scott Hanselman. All rights reserved.
    

DotNetConf 2021 - .NET Everywhere - Windows, Linux, and Beyond

$
0
0

.NET 6 is released and it's a LTS release which means it'll be fully and actively supported for the next 3 years. If you've been paused waiting for the right time to upgrade to .NET 6, it's a good time to make the move!

The .NET Upgrade Assistant can take Windows Forms, WPF, ASP.NET MVC, Console Apps, and Libraries and help you - interactively - upgrade them to .NET 6.

Why bother?

  • Massive and ongoing performance improvements
  • No need to count on .NET being on the user's machine. You can ship your own version of .NET and embed it inside your EXE! Check out Single File Deployment.
  • Tons of new C# 10 features, but they are optional, so your existing code works great but you can also "refactor via subtraction" and check out things like implicit usings.
  • Optional Profile-guided optimization (PGO) is where the JIT compiler generates optimized code in terms of the types and code paths that are most frequently used. This can mean even MORE free performance!
  • Crossgen2 can dramatically improve your startup time
  • Support for macOS Arm64 (or "Apple M1 Silicon") and Windows Arm64 operating systems, for both native Arm64 execution and x64 emulation. In addition, the x64 and Arm64 .NET installers now install side by side. For more info, see .NET Support for macOS 11 and Windows 11 for Arm64 and x64.
  • Hot Reload - just make changes and your app changes...even if you're coding in Notepad!
  • And tons more!

Check out my .NET Conf 2021 video where I see how many places I can run .NET! Windows, Linux, Docker, Mac, Raspberry Pi, even a Remarkable 2 eInk tablet. Enjoy!

 

Be sure to watch and enjoy ALL the great .NET Conf 2021 videos up on YouTube today.


Sponsor: Lob’s developer-friendly APIs make it easy to send a letter, check, or postcard, as easily as email. Design dynamic HTML templates to personalize mail for maximum impact. Start Exploring with Postman!



© 2021 Scott Hanselman. All rights reserved.
    

.NET 6 Hot Reload and "Refused to connect to ws: because it violates the Content Security Policy directive" because Web Sockets


If you're excited about Hot Reload like me AND you also want an "A" grade from SecurityHeaders.com (really, go try this now) then you will learn very quickly about Content-Security-Policy headers. You need to spend some time reading and you may end up with a somewhat sophisticated list of allowed things, scripts, stylesheets, etc.

In DasBlog Core (the cross-platform blog engine that runs this blog) Mark Downie makes these configurable and uses the NWebsec ASP.NET middleware library to add the needed headers.

if (SecurityStyleSources != null && SecurityScriptSources != null && DefaultSources != null)
{
    app.UseCsp(options => options
        .DefaultSources(s => s.Self()
            .CustomSources(DefaultSources)
        )
        .StyleSources(s => s.Self()
            .CustomSources(SecurityStyleSources)
            .UnsafeInline()
        )
        .ScriptSources(s => s.Self()
            .CustomSources(SecurityScriptSources)
            .UnsafeInline()
            .UnsafeEval()
        )
    );
}

Each of those variables comes out of a config file. Yes, it would be more secure if they came out of a vault or were even hard-coded.

DasBlog is a pretty large and cool app and we noticed immediately upon Mark upgrading it to .NET 6 that we were unable to use Hot Reload (via dotnet watch or from VS 2022). We can complain about it, or we can learn about how it works and why it's not working for us!

Remember: Nothing in your computer is hidden from you.

Starting with a simple "View Source" we can see a JavaScript include at the very bottom that is definitely not mine!

<script src="https://www.hanselman.com/_framework/aspnetcore-browser-refresh.js"></script>

Ok, this makes sense as we know not only does HotReload support C# (code behinds) but also Markup via Razor Pages and changing CSS! It would definitely need to communicate "back home" to the runner which is either "dotnet watch" or VS2022.

If I change the ASPNETCORE_ENVIRONMENT to "Production" (either via launch.json, launchSettings.json, or an environment variable like this), I can see that extra HotReload helper script isn't there:

C:\github\wshotreloadtest>dotnet run --environment="Production"

Building...
info: Microsoft.Hosting.Lifetime[14]
Now listening on: https://localhost:7216
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://localhost:5216

Remember: You never want to use dotnet run in production! It's an SDK building command! You'll want to use dotnet exec your.dll, dotnet your.dll, or best of all, in .NET 6 just call the EXE directly! .\bin\Debug\net6.0\wshotreloadtest.exe in my example. Why? dotnet run will always assume it's in Development (you literally tell it to restore, build, and exec in one run command) if you run it. You'll note that running the actual EXE is always WAY faster as well! Don't ship your .NET SDK to your webserver and don't recompile the whole thing on startup in production!

We can see that aspnetcore-browser-refresh.js is the client side of Development-time HotReload. Looking at our browser console, we see:

Refused to Connect because it violates a CSP Directive

Refused to connect to 'wss://localhost:62486/' 

because it violates the following Content Security Policy
directive: "default-src 'self'".
Note that 'connect-src' was not explicitly set,
so 'default-src' is used as a fallback.

That's a lot to think about. I started out my ASP.NET web app's middleware saying it was OK to talk "back to myself" but nowhere else.

app.UseCsp(options => options.DefaultSources(s => s.Self())); 

Hm, self seems reasonable, why can't the browser connect BACK to the dotnet run'ed Kestrel Web Server? It's all localhost, right? Well, specifically it's http://localhost not ws://localhost, or even wss://localhost (that extra s is for secure) so I need to explicitly allow ws: or wss: or both, but only in Development.

Maybe like this (again, I'm using NWebsec, but these are just HTTP headers, so you can literally just add them yourself if you want, hardcoded):

app.UseCsp(options => options
    .DefaultSources(s => s.Self())
    .ConnectSources(s => s.CustomSources("wss://localhost:62895")));

But port numbers change, right? Let's do just wss:, only in Development. Now, if I'm using both CSPs and WebSockets (ws:, wss:) in Production, I'll need to be intentional about this.
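
Here's a sketch of that idea (not DasBlog's actual code), assuming 'env' is the injected IWebHostEnvironment and the same NWebsec helpers as above - only loosen connect-src while running in Development:

if (env.IsDevelopment())
{
    // Allow WebSocket connections back to the Hot Reload / dotnet watch server, any port
    app.UseCsp(options => options
        .DefaultSources(s => s.Self())
        .ConnectSources(s => s.CustomSources("wss:")));
}
else
{
    // Production keeps the strict policy
    app.UseCsp(options => options.DefaultSources(s => s.Self()));
}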

What's the moral?

If you start using CSP Headers to tighten things up, be conscious and aware of the headers you need for conveniences like Hot Reload in Development versus whatever things you may need in Production.

Hope this helps save you some time!


Sponsor: At Rocket Mortgage® the work you do around here will be 100% impactful but won’t take all your free time, giving you the perfect work-life balance. Or as we call it, tech/life balance! Learn more.



© 2021 Scott Hanselman. All rights reserved.
    

Upgrading a 20 year old University Project to .NET 6 with dotnet-upgrade-assistant


I wrote a Tiny Virtual Operating System for a 300-level OS class in C# for college back in 2001 (?) and later moved it to VB.NET in 2002. This is all pre-.NET Core, and on early .NET 1.1 or 2.0 on Windows. I moved it to GitHub 5 years ago and ported it to .NET Core 2.0 at the time. At this point it was 15 years old, so it was cool to see this project running on Windows, Linux, in Docker, and on a Raspberry Pi...a machine that didn't exist when the project was originally written.

NOTE: If the timeline is confusing, I had already been working in industry for years at this point but was still plugging away at my 4 year degree at night. It eventually took 11 years to complete my BS in Software Engineering.

This evening, as the children slept, I wanted to see if I could run the .NET Upgrade Assistant on this now 20 year old app and get it running on .NET 6.

Let's start:

$ upgrade-assistant upgrade .\TinyOS.sln

-----------------------------------------------------------------------------------------------------------------
Microsoft .NET Upgrade Assistant v0.3.256001+3c4e05c787f588e940fe73bfa78d7eedfe0190bd

We are interested in your feedback! Please use the following link to open a survey: https://aka.ms/DotNetUASurvey
-----------------------------------------------------------------------------------------------------------------

[22:58:01 INF] Loaded 5 extensions
[22:58:02 INF] Using MSBuild from C:\Program Files\dotnet\sdk\6.0.100\
[22:58:02 INF] Using Visual Studio install from C:\Program Files\Microsoft Visual Studio\2022\Preview [v17]
[22:58:06 INF] Initializing upgrade step Select an entrypoint
[22:58:07 INF] Setting entrypoint to only project in solution: C:\Users\scott\TinyOS\src\TinyOSCore\TinyOSCore.csproj
[22:58:07 INF] Recommending executable TFM net6.0 because the project builds to an executable
[22:58:07 INF] Initializing upgrade step Select project to upgrade
[22:58:07 INF] Recommending executable TFM net6.0 because the project builds to an executable
[22:58:07 INF] Recommending executable TFM net6.0 because the project builds to an executable
[22:58:07 INF] Initializing upgrade step Back up project

See how the process is interactive at the command line, with color prompts and a series of dynamic multiple-choice questions?

Updating .NET project with the upgrade assistant

Interestingly, it builds on the first try, no errors.

When I manually look at the .csproj I can see some weird version numbers, likely from some not-quite-baked version of .NET Core 2 I used many years ago. My spidey sense says this is wrong, and I'm assuming the upgrade assistant didn't understand it.

<!-- <PackageReference Include="ILLink.Tasks" Version="0.1.4-preview-906439" /> -->
<PackageReference Include="Microsoft.Extensions.Configuration" Version="2.0.0-preview2-final" />
<PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="2.0.0-preview2-final" />
<PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="2.0.0-preview2-final" />
<PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="2.0.0-preview2-final" />

I also note a commented-out reference to ILLink.Tasks which was a preview feature in Mono's Linker to reduce the final size of apps and tree-trim them. Some of that functionality is built into .NET 6 now so I'll use that during the build and packaging process later. The reference is not needed today.

I'm gonna blindly upgrade them to .NET 6 and see what happens. I could do this by just changing the numbers and seeing if it restores and builds, but I can also try dotnet outdated which remains a lovely tool in the upgrader's toolkit.


This "outdated" tool is nice as it talks to NuGet and confirms that there are newer versions of certain packages.

In my tests - which were just batch files at this early time - I was calling my dotnet app like this:

dotnet netcoreapp2.0/TinyOSCore.dll 512 scott13.txt  

This will change to the modern form with just TinyOSCore.exe 512 scott13.txt with an exe and args and no ceremony.

Publishing and trimming my TinyOS turns it into just a 15 meg EXE. Nice, considering the .NET runtime I need is in there with no separate install. I could turn this little synthetic OS into a microservice if I wanted to be totally extra.

dotnet publish -r win-x64 --self-contained -p:PublishSingleFile=true -p:SuppressTrimAnalysisWarnings=true

If I add

-p:EnableCompressionInSingleFile=true

Then it's even smaller. No code changes. Run all my tests, looks good. My project from university from .NET 1.1 is now .NET 6.0, cross platform, self-contained in 11 megs in a single EXE. Sweet.


Sponsor: At Rocket Mortgage® the work you do around here will be 100% impactful but won’t take all your free time, giving you the perfect work-life balance. Or as we call it, tech/life balance! Learn more.



© 2021 Scott Hanselman. All rights reserved.
    

A Nightscout Segment for OhMyPosh shows my realtime Blood Sugar readings in my Git Prompt


I've talked about how I love a nice pretty prompt in my Windows Terminal and made videos showing in detail how to do it. I've also worked with my buddy TooTallNate to put my real-time blood sugar into a bash or PowerShell prompt, but this was back in 2017.

Now that I'm "Team OhMyPosh" I have been meaning to write a Nightscout "segment" for my prompt. Nightscout is an open source self-hosted (there are commercial hosts also like T1Pal) website and API for remote display of real-time and near-real-time glucose readings for Diabetics like myself.

Since my body has an active REST API where I can just do an HTTP GET (via curl or whatever) and see my blood sugar, it clearly belongs in a place of honor, just like my current Git Branch!

My blood sugar in my Prompt!

Oh My Posh supports configurable "segments," and now there's a beta Nightscout segment (it still needs mmol and stale readings support) that you can set up in just a few minutes!

This prompt works in ANY shell on ANY OS! You can do this in zsh, PowerShell, Bash, whatever makes you happy.

Here is a YouTube video of Jan from OhMyPosh and me coding the segment LIVE in Go.

If you have an existing OhMyPosh JSON config, you can just add another segment like this. Make sure your Nightscout URL includes a secure token or is public (up to you). Note also that I set up "if/then" rules in my background_templates. These are optional and up to you to change to your taste. I set my background colors to red, yellow, or green depending on sugar numbers. I also have a foreground template that is not really used - as you can see it always evaluates to black #000 - but it shows you how you could set it to white text on a darker background if you wanted.

{
  "type": "nightscout",
  "style": "diamond",
  "foreground": "#ffffff",
  "background": "#ff0000",
  "background_templates": [
    "{{ if gt .Sgv 150 }}#FFFF00{{ end }}",
    "{{ if lt .Sgv 60 }}#FF0000{{ end }}",
    "#00FF00"
  ],
  "foreground_templates": [
    "{{ if gt .Sgv 150 }}#000000{{ end }}",
    "{{ if lt .Sgv 60 }}#000000{{ end }}",
    "#000000"
  ],
  "leading_diamond": "",
  "trailing_diamond": "\uE0B0",
  "properties": {
    "url": "https://YOURNIGHTSCOUTAPP.herokuapp.com/api/v1/entries.json?count=1&token=APITOKENFROMYOURADMIN",
    "http_timeout": 1500,
    "template": " {{.Sgv}}{{.TrendIcon}}"
  }
},

By default we will only go out and hit your Nightscout instance every 5 minutes, and only when the prompt is repainted, and we'll only wait 1500ms before giving up. You can set that "http_timeout" (how long before we give up) if you feel this slows you down. It'll be cached for 5 minutes, so it's unlikely to be something you'll notice. The benefit of this new OhMyPosh segment over the previous solution is that it requires no additional services or cron jobs and can be set up extremely quickly. Note also that you can customize your template with NerdFonts. I've included a tiny syringe!

What a lovely prompt with Blood Sugar!

Next I hope to improve the segment with mmol support as well as a strikeout style for "stale" (over 15 min old) results. You're also welcome to help out by watching our YouTube video and submitting a PR!


Sponsor: Make login Auth0’s problem. Not yours. Provide the convenient login features your customers want, like social login, multi-factor authentication, single sign-on, passwordless, and more. Get started for free.



© 2021 Scott Hanselman. All rights reserved.
    

JavaScript and TypeScript Projects with React, Angular, or Vue in Visual Studio 2022 with or without .NET


I was reading Gabby's blog post about the new TypeScript/JavaScript project experience in Visual Studio 2022. You should read the docs on JavaScript and TypeScript in Visual Studio 2022.

If you're used to ASP.NET apps, then when you think about apps that are JavaScript-heavy, "front end" focused, or TypeScript-focused, it can be confusing to figure out "where does .NET fit in?"

You need to consider the responsibilities of your various projects or subsystems and the multiple totally valid ways you can build a web site or web app. Let's consider just a few:

  1. An ASP.NET Web app that renders HTML on the server but uses TS/JS
    • This may have a Web API, Razor Pages, with or without the MVC pattern.
    • Maybe you have just added JavaScript via <script> tags
    • Maybe you added a script minimizer/minifier task
    • Can be confusing because it can feel like your app needs to 'build both the client and the server' from one project
  2. A mostly JavaScript/TypeScript frontend app where the HTML could be served from any web server (node, kestrel, static web apps, nginx, etc)
    • This app may use Vue or React or Angular but it's not an "ASP.NET app"
    • It calls backend Web APIs that may be served by ASP.NET, Azure Functions, 3rd party REST APIs, or all of the above
    • This scenario has sometimes been confusing for ASP.NET developers who are unsure about responsibility: who builds what, where do things end up, and how do I build and deploy this?

VS2022 brings JavaScript and TypeScript support into VS with a full JavaScript Language Service based on TypeScript. It provides a TypeScript NuGet package so you can build your whole app with MSBuild, and VS will do the right thing.
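
That NuGet route is just a package reference in your .csproj. Here's a sketch, with a floating version you'd pin to whatever TypeScript you actually want:

<!-- A sketch: the TypeScript NuGet package compiles your TS as part of the MSBuild build; pin the version you want -->
<PackageReference Include="Microsoft.TypeScript.MSBuild" Version="4.*">
  <PrivateAssets>all</PrivateAssets>
</PackageReference>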

NEW: Starting in Visual Studio 2022, there is a new JavaScript/TypeScript project type (.esproj) that allows you to create standalone Angular, React, and Vue projects in Visual Studio.

The .esproj concept is great for folks familiar with Visual Studio as we know that a Solution contains one or more Projects. Visual Studio manages files for a single application in a Project. The project includes source code, resources, and configuration files. In this case we can have a .csproj for a backend Web API and an .esproj that uses a client side template like Angular, React, or Vue.
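
So a solution built this way might look something like the following (names are purely illustrative):

MyApp.sln
├── myapp-frontend/     the Angular, React, or Vue client (myapp-frontend.esproj)
└── MyApp.Api/          the ASP.NET Core Web API backend (MyApp.Api.csproj)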

Thing is, historically when Visual Studio supported Angular, React, or Vue, its templates were out of date and not updated often enough. VS2022 solves that problem by using the native CLIs for these front ends: Angular CLI, Create React App, and Vue CLI.
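
In other words, the scaffolding you get is roughly what the CLI itself would give you - for React, something like:

npx create-react-app my-frontend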

If I am in VS and go "File New Project" there are Standalone templates that solve Example 2 above. I'll pick JavaScript React.

Standalone JavaScript Templates in VS2022

Then I'll click "Add integration for Empty ASP.NET Web API." This will give me a frontend with JavaScript ready to call an ASP.NET Web API backend. I'll follow along here.

Standalone JavaScript React Template

It then uses Create React App to make the front end, which, again, is cool because it's whatever version I want it to be.

React Create CLI

Then I'll add my ASP.NET Web API backend to the same solution, so now I have an .esproj and a .csproj like this:

frontend and backend

Now I have a nice clean two-project system - in this case more JavaScript focused than .NET focused. This one uses npm to start up the project with its web development server and proxy middleware to proxy localhost:3000 calls over to the ASP.NET Web API project.
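
The proxy part is worth seeing. Here's a sketch (not exactly the file the template generates) of how a Create React App setupProxy.js forwards API calls to Kestrel using http-proxy-middleware - the /weatherforecast path and the port number are assumptions you'd swap for your own:

// src/setupProxy.js - a sketch; swap the path and target port for your own API
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/weatherforecast', // calls to this path on localhost:3000...
    createProxyMiddleware({
      target: 'https://localhost:5001', // ...get forwarded to the ASP.NET Core Web API (port is an assumption - check launchSettings.json)
      secure: false, // accept the self-signed dev certificate
    })
  );
};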

Here is a React app served by npm calling over to the Weather service served from Kestrel on ASP.NET.

npm app running in VS 2022 against an ASP.NET Web API

This is inverted from what most ASP.NET folks are used to, and that's OK. It shows me that Visual Studio 2022 can support either development style, use whatever CLI is installed for your frontend framework, and let me choose which web server and web browser (via launch.json) I want.

If you want to flip it, and put ASP.NET Core as the primary and then bring in some TypeScript/JavaScript, follow this tutorial because that's also possible!


Sponsor: Make login Auth0’s problem. Not yours. Provide the convenient login features your customers want, like social login, multi-factor authentication, single sign-on, passwordless, and more. Get started for free.



© 2021 Scott Hanselman. All rights reserved.
    


<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>