Channel: Scott Hanselman's Blog

Turn your Raspberry Pi into a portable Touchscreen Tablet with SunFounder's RasPad


I was very fortunate to get a preview version of the "RasPad" from SunFounder. Check it out at https://raspad.sunfounder.com/ - at the time of this writing they have a Kickstarter I'm backing!

I've written a lot about Raspberry Pis and the cool projects you can do with them. My now-10 and 12 year olds love making stuff with Raspberry Pis and we have at least a dozen of them around the house. A few are portable arcades (some quite tiny PiArcades), one runs PiMusicBox and is a streaming radio, and I have a few myself in a Kubernetes Cluster.

I've built Raspberry Pi Cars with SunFounder parts, so they sent me an early evaluation version of their "RasPad." I was familiar with the general idea as I'd tried (and failed) to make something like it with their 10" Touchscreen LCD for Raspberry Pi.

At its heart, the RasPad is quite elegant and simple. It's a housing for your Raspberry Pi that includes a battery for portable use along with an integrated touchscreen LCD. However, it's the little details where it shines.

RasPad - Raspberry Pi Touchscreen

It's not meant to be an iPad. It's not trying. It's thick on one end, and beveled to an angle. You put your RaspberryPi inside the back corner and it sits nicely on the plastic posts without screws. Power and HDMI are inside with cables, then it's one button to turn it on. There's an included power supply as well as batteries to run the Pi and screen for a few hours while portable.

RasPad ports are extensive

I've found with my 10 year old that this neat, organized little tablet mode makes the Pi more accessible and interesting to him - as opposed to the usual mess of wires and bare circuit boards we usually have on my workbench. I could see a fleet of RasPads in a classroom environment being far more engaging than just "raw" Pis on a table.

The back of the RasPad has a slot where a GPIO Ribbon cable can come out to a breakout  board:

GPIO slot is convenient

At this point you can do all the same cool hardware projects you can do with a Raspberry Pi, with all the wires, power, touchscreen, ports, and everything nice and sanitary.

The inside hatch is flexible enough for other boards as well:

Raspberry Pi or TinkerBoard

I asked my 10 year old what he wanted to make with the RasPad/Raspberry Pi and he said he wanted to make a "burglar alarm" for his bedroom. Pretty sure he just wants to keep the 12 year old out of his room.

We started with a Logitech 930e USB Webcam we had laying around. The Raspberry Pi can use lots of off-the-shelf high-quality webcams without drivers, and the RasPad keeps all the USB ports exposed.

Then we installed the "Motion" Project. It's on GitHub at https://github.com/Motion-Project/motion with:

sudo apt-get install motion

Then we edited /etc/motion/motion.conf with the nano editor (easier for kids than vim). You'll want to confirm the height and width. Smaller is easier on the Pi, but you can go big with 1280x720 if you like! We also set the target_dir to /tmp since motion's daemon doesn't have access to ~/.

There's a number of events you can take action on, like "on_motion_detected." We just added a little Python script to let people know "WE SEE YOU."

It's also cool to set locate_motion_style to "redbox" so you can see WHERE motion was detected in a frame, and be sure to set stream_localhost to "off" so you can hit http://yourraspberrypiname:8081 to see the stream remotely!
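Putting those settings together, the relevant bits of our /etc/motion/motion.conf ended up looking roughly like this (a sketch - option names can vary a bit between motion versions, and the script path is just an example):

width 1280
height 720
target_dir /tmp
stream_localhost off
locate_motion_mode on
locate_motion_style redbox
# run the kid's script when motion is seen (hypothetical path)
on_motion_detected python3 /home/pi/burglar-alarm.py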

When motion is detected, the 10 year old's little Python script launches:

GET OUT OF MY ROOM

And as a bonus, here is the 10 year old trying to sneak into the room. Can you spot him? (The camera did)

IMG_3389

What would you build with a RaspberryPi Tablet?

BTW, there's a Community Build of the .NET Core SDK for Raspberry Pi!


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2018 Scott Hanselman. All rights reserved.
     

Automatic Unit Testing in .NET Core plus Code Coverage in Visual Studio Code


I was talking to Toni Edward Solarin on Skype yesterday about his open source spike (early days) of Code Coverage for .NET Core called "coverlet." There's a few options out there for cobbling together .NET Core Code Coverage but I wanted to see if I could use the lightest tools I could find and make a "complete" solution for Visual Studio Code that would work for .NET Core cross platform. I put my own living spike of a project up on GitHub.

Now, keeping in mind that Toni's project is just getting started and (as of the time of this writing) currently supports line and method coverage, and branch coverage is in progress, this is still a VERY compelling developer experience.

Using VS Code, Coverlet, xUnit, plus these Visual Studio Code extensions

Here's what we came up with.

Auto testing, code coverage, line coloring, test explorers, all in VS Code

There's a lot going on here but take a moment and absorb the screenshot of VS Code above.

  • Our test project is using xunit and the xunit runner that integrates with .NET Core as expected.
    • That means we can just "dotnet test" and it'll build and run tests.
  • Added coverlet (via the coverlet.msbuild NuGet package - see the command just below), which integrates with MSBuild and automatically runs when you "dotnet test" if you "dotnet test /p:CollectCoverage=true"
    • (I think this command line switch should be more like "--coverage" but there may be an MSBuild limitation here.)
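If you're following along, the coverlet.msbuild package gets added to the test project like any other NuGet package:

dotnet add package coverlet.msbuild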

I'm interested in "The Developer's Inner Loop." That means I want to have my tests open, my code open, and as I'm typing I want the solution to build, run tests, and update code coverage automatically the way Visual Studio proper does auto-testing, but in a more Rube Goldbergian way. We're close with this setup, although it's a little slow.

Coverlet can produce opencover, lcov, or json files as a resulting output file. You can then generate detailed reports from this. There is a language agnostic VS Code Extension called Coverage Gutters that can read in lcov files and others and highlight line gutters with red, yellow, green to show test coverage. Those lcov files look like this, showing file names, line numbers, and how many times each line was hit.

SF:C:\github\hanselminutes-core\hanselminutes.core\Constants.cs
DA:3,0
end_of_record
SF:C:\github\hanselminutes-core\hanselminutes.core\MarkdownTagHelper.cs
DA:21,5
DA:23,5
DA:49,5

I should be able to pick the coverage file manually with the extension, but due to a small bug, it's easier to just tell Coverlet to generate a specific file name in a specific format.

dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov.info .\my.tests

The lcov.info file is then watched by the VS Code Coverage Gutters extension and updates as the file changes if you click watch in the VS Code Status Bar.

You can take it even further if you add "dotnet watch test" which will compile and re-run tests if code changes:

dotnet watch --project .\my.tests test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov.info 

I can run "WatchTests.cmd" in another terminal, or within the VS Code integrated terminal.

tests automatically running as code changes

NOTE: If you're doing code coverage you'll want to ensure your tests and tested assembly are NOT the same file. You might be able to get it to work but it's easier to keep things separate.

Next, add in the totally underappreciated .NET Core Test Explorer extension (this should have hundreds of thousands of downloads - it's criminal) to get this nice Test Explorer pane:

A Test Explorer tree view in VS Code for NET Core projects

Even better, .NET Test Explorer lights up some "code lens" style interfaces over each test as well as a green checkmark for passing tests. Having "debug test" available for .NET Core is an absolute joy.

Check out "run test" and "debug test"

Finally we make some specific improvements to the .vscode/tasks.json file that drives much of VS Code's experience with our app. The "build" label is standard but note both the custom "test" and "test with coverage" labels, as well as the added group with kind: "test."

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "command": "dotnet",
            "type": "process",
            "args": [
                "build",
                "${workspaceFolder}/hanselminutes.core.tests/hanselminutes.core.tests.csproj"
            ],
            "problemMatcher": "$msCompile",
            "group": {
                "kind": "build",
                "isDefault": true
            }
        },
        {
            "label": "test",
            "command": "dotnet",
            "type": "process",
            "args": [
                "test",
                "${workspaceFolder}/hanselminutes.core.tests/hanselminutes.core.tests.csproj"
            ],
            "problemMatcher": "$msCompile",
            "group": {
                "kind": "test",
                "isDefault": true
            }
        },
        {
            "label": "test with coverage",
            "command": "dotnet",
            "type": "process",
            "args": [
                "test",
                "/p:CollectCoverage=true",
                "/p:CoverletOutputFormat=lcov",
                "/p:CoverletOutput=./lcov.info",
                "${workspaceFolder}/hanselminutes.core.tests/hanselminutes.core.tests.csproj"
            ],
            "problemMatcher": "$msCompile",
            "group": {
                "kind": "test",
                "isDefault": true
            }
        }
    ]
}

This lets VS Code know what's for building and what's for testing, so if I use the Command Palette to "Run Test" then I'll get this dropdown that lets me run tests and/or update coverage manually if I don't want the autowatch stuff going.

Test or Test with Coverage

Again, all this is just getting started but I've applied it to my Podcast Site that I'm currently rewriting and the experience is very smooth!

Here's a call to action for you! Toni is just getting started on Coverlet and I'm sure he'd love some help. Head over to the Coverlet github and don't just file issues and complain! This is an opportunity for you to get to know the deep internals of .NET and create something cool for the larger community.

What are your thoughts?


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2018 Scott Hanselman. All rights reserved.
     

Command line "tab" completion for .NET Core CLI in PowerShell or bash


Lots of people are using open source .NET Core and the "dotnet" command line, but few know that the .NET CLI supports command "tab" completion!

You can ensure you have it on .NET Core 2.0 with this test:

C:\Users\scott>  dotnet complete "dotnet add pac"
package

You can see I do, as it proposed "package" as the completion for "pac"

Now, just go into PowerShell and run:

notepad $PROFILE

And add this code to the bottom to register "dotnet complete" as the "argument completer" for the dotnet command.

# PowerShell parameter completion shim for the dotnet CLI 
Register-ArgumentCompleter -Native -CommandName dotnet -ScriptBlock {
    param($commandName, $wordToComplete, $cursorPosition)
        dotnet complete --position $cursorPosition "$wordToComplete" | ForEach-Object {
           [System.Management.Automation.CompletionResult]::new($_, $_, 'ParameterValue', $_)
        }
}

Then just use it! You can do the same not only in PowerShell, but in bash, or zsh as well!
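For bash, it's the same idea - ask "dotnet complete" for suggestions and hand them to compgen. Something along these lines in your ~/.bashrc should work (a sketch based on the registration in the .NET CLI docs):

# bash parameter completion for the dotnet CLI
_dotnet_bash_complete()
{
  local word=${COMP_WORDS[COMP_CWORD]}
  local completions
  completions="$(dotnet complete --position "${COMP_POINT}" "${COMP_LINE}" 2>/dev/null)"
  COMPREPLY=( $(compgen -W "$completions" -- "$word") )
}
complete -f -F _dotnet_bash_complete dotnet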

It's super useful for "dotnet add package" because it'll make smart completions like this:

It also works for adding/removing local project references as it is project file aware. Go set it up NOW, it'll take you 3 minutes.

RANDOM BUT ALSO USEFUL: "dotnet serve" - A simple command-line HTTP server.

Here's a useful little global tool - dotnet serve. It launches a server in the current working directory and serves all files in it. It's not Kestrel, the .NET application/web server. It's just a static web server for development.

The latest release of dotnet-serve requires the 2.1.300-preview1 .NET Core SDK or newer. Once installed, run this command:

dotnet install tool --global dotnet-serve 

Then whenever I'm in a folder where I want to serve something static (CSS, JS, PNGs, whatever) I can just

dotnet serve

It can also optionally open a web browser navigated to that localhost URL.
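From memory (so double-check "dotnet serve --help"), opening the browser and picking a port look something like this:

dotnet serve -o -p 8080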

NOTE: Here's a growing list of .NET Global Tools.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2018 Scott Hanselman. All rights reserved.
     

Easier functional and integration testing of ASP.NET Core applications


In ASP.NET Core 2.1 (now in preview) there's apparently a new package called Microsoft.AspNetCore.Mvc.Testing that's meant to help streamline in-memory end-to-end testing of applications that use the MVC pattern. I've been re-writing my podcast site at https://hanselminutes.com in ASP.NET Core 2.1 lately, and recently added some unit testing and automatic unit testing with code coverage. Here's a couple of basic tests. Note that these call the Razor Pages' OnGet() methods directly rather than going through HTTP. This shows how ASP.NET Core is nicely factored for Unit Testing but it doesn't do a "real" HTTP GET or perform true end-to-end testing.

These tests are testing if visiting URLs like /620 will automatically redirect to the correct full canonical path as they should.

[Fact]
public async void ShowDetailsPageIncompleteTitleUrlTest()
{
    // FAKE HTTP GET "/620"
    IActionResult result = await pageModel.OnGetAsync(id: 620, path: "");

    RedirectResult r = Assert.IsType<RedirectResult>(result);
    Assert.NotNull(r);
    Assert.True(r.Permanent); //HTTP 301?
    Assert.Equal("/620/jessica-rose-and-the-worst-advice-ever", r.Url);
}

[Fact]
public async void SuperOldShowTest()
{
    // FAKE HTTP GET "/default.aspx?showId=18602"
    IActionResult result = await pageModel.OnGetOldShowId(18602);

    RedirectResult r = Assert.IsType<RedirectResult>(result);
    Assert.NotNull(r);
    Assert.True(r.Permanent); //HTTP 301?
    Assert.StartsWith("/615/developing-on-not-for-a-nokia-feature", r.Url);
}

I wanted to see how quickly and easily I could do these same two tests, except "from the outside" with an HTTP GET, thereby testing more of the stack.

I added a reference to Microsoft.AspNetCore.Mvc.Testing in my testing assembly using the command-line equivalent of "Right Click | Add NuGet Package" in Visual Studio. This CLI command does the same thing as the UI and adds the package to the csproj file.

dotnet add package Microsoft.AspNetCore.Mvc.Testing -v 2.1.0-preview1-final

It includes a new WebApplicationTestFixture that I point to my app's Startup class. Note that I can store the HttpClient that the TestFixture makes for me.

public class TestingMvcFunctionalTests : IClassFixture<WebApplicationTestFixture<Startup>>
{
    public HttpClient Client { get; }

    public TestingMvcFunctionalTests(WebApplicationTestFixture<Startup> fixture)
    {
        Client = fixture.Client;
    }
}

No tests yet, just setup. I'm using SSL redirection so I'll make sure the client knows that, and add a test:

public TestingMvcFunctionalTests(WebApplicationTestFixture<Startup> fixture)
{
    Client = fixture.Client;
    Client.BaseAddress = new Uri("https://localhost");
}

[Fact]
public async Task GetHomePage()
{
    // Arrange & Act
    var response = await Client.GetAsync("/");

    // Assert
    Assert.Equal(HttpStatusCode.OK, response.StatusCode);
}

This will fail, in fact. Because I have an API Key that is needed to call out to my backend system, and I store it in .NET's User Secrets system. My test will get an InternalServerError instead of OK.

Starting test execution, please wait...

[xUnit.net 00:00:01.2110048] Discovering: hanselminutes.core.tests
[xUnit.net 00:00:01.2690390] Discovered: hanselminutes.core.tests
[xUnit.net 00:00:01.2749018] Starting: hanselminutes.core.tests
[xUnit.net 00:00:08.1088832] hanselminutes_core_tests.TestingMvcFunctionalTests.GetHomePage [FAIL]
[xUnit.net 00:00:08.1102884] Assert.Equal() Failure
[xUnit.net 00:00:08.1103719] Expected: OK
[xUnit.net 00:00:08.1104377] Actual: InternalServerError
[xUnit.net 00:00:08.1114432] Stack Trace:
[xUnit.net 00:00:08.1124268] D:\github\hanselminutes-core\hanselminutes.core.tests\FunctionalTests.cs(29,0): at hanselminutes_core_tests.TestingMvcFunctionalTests.<GetHomePage>d__4.MoveNext()
[xUnit.net 00:00:08.1126872] --- End of stack trace from previous location where exception was thrown ---
[xUnit.net 00:00:08.1158250] Finished: hanselminutes.core.tests
Failed hanselminutes_core_tests.TestingMvcFunctionalTests.GetHomePage
Error Message:
Assert.Equal() Failure
Expected: OK
Actual: InternalServerError

Where do these secrets come from? In Development they come from user secrets.

public Startup(IHostingEnvironment env)
{
    this.env = env;
    var builder = new ConfigurationBuilder();

    if (env.IsDevelopment())
    {
        builder.AddUserSecrets<Startup>();
    }
    Configuration = builder.Build();
}

But in Production they come from the ENVIRONMENT. Are these tests Development or Production...I must ask myself. They are Production unless told otherwise. I can override the Fixture and tell it to use another Environment, like "Development." Here is a way (given this preview) to make my own TestFixture by deriving and overriding ConfigureWebHost to change the Environment. I think it's too hard and should be easier.

Either way, the real question here is for me - do I want my tests to be integration tests in development or in "production." Likely I need to make a new environment for myself - "testing."

public class MyOwnTextFixture<TStartup> : WebApplicationTestFixture<Startup> where TStartup : class
{
    public MyOwnTextFixture() { }

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.UseEnvironment("Development");
    }
}

However, my User Secrets still aren't loading, and that's where the API Key is that I need.

BUG?: There is either a bug here, or I don't know what I'm doing. I'm loading User Secrets in builder.AddUserSecrets<Startup> and later injecting the IConfiguration instance from builder.Build() and going "_apiKey = config["SimpleCastAPIKey"];" but it's null. The config that's injected later in the app isn't the same one that's created in Startup.cs. It's empty. Not sure if this is an ASP.NET Core 2.0 thing or 2.1 thing but I'm going to bring it up with the team and update this blog post later. It might be a Razor Pages subtlety I'm missing.
For now, I'm going to put in a check and manually fix up my Config. However, when this is fixed (or I discover my error) this whole thing will be a pretty nice little set up for integration testing.
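That check is nothing fancy - just a sketch like this in the class that reads the key (the fallback call to AddUserSecrets here is my own workaround, not an official fix):

_apiKey = config["SimpleCastAPIKey"];
if (string.IsNullOrEmpty(_apiKey))
{
    // Workaround: the injected IConfiguration came up empty, so rebuild it and pull in User Secrets directly
    _apiKey = new ConfigurationBuilder().AddUserSecrets<Startup>().Build()["SimpleCastAPIKey"];
}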

I will add another test, similar to the redirect Unit Test but a fuller integration test that actually uses HTTP and tests the result.

[Fact]
public async Task GetAShow()
{
    // Arrange & Act
    var response = await Client.GetAsync("/620");

    // Assert
    Assert.Equal(HttpStatusCode.MovedPermanently, response.StatusCode);
    Assert.Equal("/620/jessica-rose-and-the-worst-advice-ever", response.Headers.Location.ToString());
}

There's another issue here that I don't understand. Because I have to set Client.BaseAddress to https://localhost (because https), and the Client is passed in via fixture.Client, I can't set the BaseAddress twice or I'll get an exception. The Test's constructor runs for each test, but the HttpClient that's passed in has a longer lifecycle. It's being reused, and it fails when setting its BaseAddress twice.

Error Message:

System.InvalidOperationException : This instance has already started one or more requests. Properties can only be modified before sending the first request.

BUG? So to work around it I check to see if I've done it before. Which is gross. I want to set the BaseAddress once, but I am not in charge of the creation of this HttpClient as it's passed in by the Fixture.

public TestingMvcFunctionalTests(MyOwnTextFixture<Startup> fixture)
{
    Client = fixture.Client;
    if (Client.BaseAddress.ToString().StartsWith("https://") == false)
        Client.BaseAddress = new Uri("https://localhost");
}

Another option is to create a new client every time, which is less efficient but perhaps a better idea as it avoids any side effects from other tests. It also feels weird that I should have to do this, as the new standard for ASP.NET Core sites is to be SSL/HTTPS by default.

public TestingMvcFunctionalTests(MyOwnTextFixture<Startup> fixture)
{
    Client = fixture.CreateClient(new Uri("https://localhost"));
}

I'm still learning about how it all fits together, but later I plan to add in Selenium tests to have a full, complete, test suite that includes the browser, CSS, JavaScript, end-to-end integration tests, and unit tests.

Let me know if you think I'm doing something wrong. This is preview stuff, so it's early days!


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.


© 2018 Scott Hanselman. All rights reserved.
     

Audio Switcher should be built into Windows - Easily Switch Playback and Recording Devices


I've been running a podcast now for over 600 episodes and I do most of my recordings here at home using a Peavey PV6 Mixing Console - it's fantastic. However, I also work remotely and use Skype a lot to talk to co-workers. Sometimes I use a USB Headset but I also have a Polycom Work Phone for conference calls. Plus my webcams have microphones, so all this adds up to a lot of audio devices.

Windows 10 improved the switching experience for Playback Devices, but there's no "two click" way to quickly change Recording Devices. A lot of Sound Settings are moving into the Windows 10 Settings App but it's still incomplete and sometimes you'll find yourself looking at the older Sound Dialog:

Sound Control Panel

Enter David Kean's "Audio Switcher." It's nearly 3 years old with source code on GitHub, but it works AMAZINGLY. It's literally what the Power User has always wanted when managing audio on Windows 10.

It adds a Headphone icon in the tray, and clicking it puts the Speakers at the top and Mics at the bottom. Right-clicking an item lets you set it as default. It's even nicer if you set the icons for your devices like I did.

Audio Switcher

Ok, that's the good news. It's great, and there's Source Code available so you can build it easily with free Visual Studio Community.

Bad news? Today, there's no "release" or ZIP or EXE file for you to download. That said, I uploaded a totally unsupported and totally not my responsibility and you shouldn't trust me compiled version here.

Hopefully after this blog post is up a few days, David will see this blog post and make an installer with a cert and/or put this wonderful utility somewhere, as folks clearly are interested. I'll update this blog post as soon as more people start using Audio Switcher.

Thank you David for making this fantastic utility!


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2018 Scott Hanselman. All rights reserved.
     

Optimizing an ASP.NET Core site with Chrome's Lighthouse Auditor


I'm continuing to update my podcast site. I've upgraded it from ASP.NET "Web Pages" (10 year old code written in WebMatrix) to ASP.NET Core 2.1 developed with VS Code. Here's some recent posts:

I was talking with Ire Aderinokun today for an upcoming podcast episode and she mentioned I should use Lighthouse (it's built into Chrome, can be run as an extension, or run from the command line) to optimize my podcast site. I, frankly, had not looked at that part of Chrome in a long time and was shocked at how powerful it was!
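If you'd rather run it from the command line, Lighthouse ships as an npm package, and (assuming Node is installed) something like this will audit a site and pop open the HTML report:

npm install -g lighthouse
lighthouse https://www.hanselminutes.com --view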

Performance 73, PWA 55, Accessibility 68, Best Practices 81, SEO 78

Lighthouse also told me that I was using an old version of jQuery (I know that) that had known security issues (I didn't know that!)

It told me about Accessibility issues as well, pointing out that some of my links were not discernable to a screen reader.

Some of these issues were/are easily fixed in minutes. I think I spent about 20 minutes fixing up some links, compressing a few images, and generally "tidying up" in ways that I knew wouldn't/shouldn't break my site. Those few minutes took my Accessibility and Best Practices score up measurably, but I clearly have some work to do around Performance. I never even considered my Podcast Site as a potential Progressive Web App (PWA) but now that I have a new podcast host and a nice embedded player, that may be a possibility for the future!

Performance 73, PWA 55, Accessibility 85, Best Practices 88, SEO 78

My largest issue is with my (aging) CSS. I'd like to convert the site to use Flexbox or a CSS Grid as well as fix up my Time to First Meaningful Paint.

I went and updated my Archives page a while back with Lazy Image loading, but it was using jQuery and some older (4+ year old) techniques. I'll revisit those with modern techniques AND apply them to the grid of 16 shows on the site's home page as well.

There are opportunities to speed up my application using offscreen images

I have only just begun but I'll report back as I speed things up!

What tools do YOU use to audit your websites?


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2018 Scott Hanselman. All rights reserved.
     

Updating jQuery-based Lazy Image Loading to IntersectionObserver


Five years ago I implemented "lazy loading" of the 600+ images on my podcast's archives page (I don't like paging, as a rule) over here https://www.hanselminutes.com/episodes. I did it with jQuery and a jQuery Plugin. It was kind of messy and gross from a purist's perspective, but it totally worked and has easily saved me (and you) hundreds of dollars in bandwidth over the years. The page is like 9 or 10 megs if you load 600 images, not to mention you're loading 600 freaking images.

Fast-forward to 2018, and there's the "Intersection Observer API" that's supported everywhere but Safari and IE, well, because, Safari and IE, sigh. We will return to that issue in a moment.

Following Dean Hume's blog post on the topic, I start with my images like this. I don't populate src="", but instead hold the Image URL in the HTML5 data- bucket of data-src. For src, I can use a "nothing" grey.gif or just style and color the image grey.

<a href="http://feeds.hanselman.com/~/t/0/0/scotthanselman/~https://www.hanselman.com/626/christine-spangs-open-source-journey-from-teen-oss-contributor-to-cto-of-nylas" class="showCard">
    <img data-src="https://images.hanselminutes.com/images/626.jpg" 
         class="lazy" src="https://www.hanselman.com/images/grey.gif" width="212" height="212" alt="Christine Spang&#x27;s Open Source Journey from Teen OSS Contributor to CTO of Nylas" />
    <span class="shownumber">626</span>                
    <div class="overlay title">Christine Spang&#x27;s Open Source Journey from Teen OSS Contributor to CTO of Nylas</div>
</a>
<a href="http://feeds.hanselman.com/~/t/0/0/scotthanselman/~https://www.hanselman.com/625/a-new-sega-megadrivegenesis-game-in-2018-with-1995-tools-with-tanglewoods-matt-phillips" class="showCard">
    <img data-src="https://images.hanselminutes.com/images/625.jpg" 
         class="lazy" src="https://www.hanselman.com/images/grey.gif" width="212" height="212" alt="A new Sega Megadrive/Genesis Game in 2018 with 1995 Tools with Tanglewood&#x27;s Matt Phillips" />
    <span class="shownumber">625</span>                
    <div class="overlay title">A new Sega Megadrive/Genesis Game in 2018 with 1995 Tools with Tanglewood&#x27;s Matt Phillips</div>
</a>

Then, if the images get within 50px of intersecting the viewport (as I'm scrolling down), I load them:

// Get images of class lazy
const images = document.querySelectorAll('.lazy');
const config = {
  // If image gets within 50px go get it
  rootMargin: '50px 0px',
  threshold: 0.01
};
let observer = new IntersectionObserver(onIntersection, config);
images.forEach(image => {
  observer.observe(image);
});

Now that we are watching it, we need to do something when it's observed.

function onIntersection(entries) {
  // Loop through the entries
  entries.forEach(entry => {
    // Are we in viewport?
    if (entry.intersectionRatio > 0) {
      // Stop watching and load the image
      observer.unobserve(entry.target);
      preloadImage(entry.target);
    }
  });
}

If the browser (IE, Safari, Mobile Safari) doesn't support IntersectionObserver, we can do a few things. I *could* fall back to my old jQuery technique, although it would involve loading a bunch of extra scripts for those browsers, or I could just load all the images in a loop, regardless, like:

if (!('IntersectionObserver' in window)) {
    loadImagesImmediately(images);
} else {...}
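Neither preloadImage nor loadImagesImmediately is anything fancy - here's a minimal sketch of both, assuming the data-src convention from the markup above (your attribute names may differ):

// Copy data-src into src so the browser actually fetches the image
function preloadImage(img) {
  const src = img.getAttribute('data-src');
  if (src) {
    img.src = src;
  }
}

// Fallback for browsers without IntersectionObserver: just load everything up front
function loadImagesImmediately(images) {
  images.forEach(image => preloadImage(image));
}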

Dean's examples are all "Vanilla JS" and require no jQuery, no plugins, and no polyfills - assuming browser support. There are also some IntersectionObserver helper libraries out there like Cory Dowdy's IOLazy. Cory's is a nice simple wrapper and is super easy to implement. Given I want to support iOS Safari as well, I am using a polyfill to get the support I want from browsers that don't have it natively.

<!-- intersection observer polyfill -->
<script src="https://cdn.polyfill.io/v2/polyfill.min.js?features=IntersectionObserver"></script>

Polyfill.io is a lovely site that gives you just the fills you need (or those you need AND request) tailored to your browser. Try GETting the URL above in Chrome. You'll see it's basically empty as you don't need it. Then hit it in IE, and you'll get the polyfill. The official IntersectionObserver polyfill is at the w3c.

At this point I've removed jQuery entirely from my site and I'm just using an optional polyfill plus browser support that didn't exist when I started my podcast site. Fewer moving parts means a cleaner, leaner, simpler site!

Go subscribe to the Hanselminutes Podcast today! We're on iTunes, Spotify, Google Play, and even Twitter!


Sponsor: Announcing Raygun APM! Now you can monitor your entire application stack, with your whole team, all in one place. Learn more!



© 2018 Scott Hanselman. All rights reserved.
     

Retrogaming on original consoles in HDMI on a budget


Just a few of my consoles. There's a LOT off screen.

My sons (10 and 12) and I have been enjoying Retrogaming as a hobby of late. Sure there's a lot of talk of 4k 60fps this and that, but there's amazing stories in classic video games. From The Legend of Zelda (all of them) to Ico and Shadow of the Colossus, we are enjoying playing games across every platform. Over the years we've assembled quite the collection of consoles, most purchased at thrift stores.

Initially I started out as a purist, wanting to play each game on the original console unmodified. I'm not a fan of emulators for a number of reasons. I don't particularly like the idea of illegal ROMs coming up, and I'd like to support the original game creators. Additionally, if I can support a small business by purchasing original game cartridges or CDs, I prefer to do that as well. However, the kids and I have come up with somewhat of a balance in our console selection.

For example, we enjoy the Hyperkin Retron 5 in that it lets us play NES, Famicom, SNES, Super Famicom, Genesis, Mega Drive, Game Boy, Game Boy Color, & Game Boy Advance over 5 cartridge ports. With one additional adapter, it adds Game Gear, Master System, and Master System Cards. It uses emulators at its heart, but it requires the use of the original game cartridges. However, the Hyperkin supports all the original controllers - many of which we've found at our local thrift store - which strikes a nice balance between the old and the new. Best of all, it uses HDMI as its output plug which makes it super easy to hook up to our TV.

The prevalence of HDMI as THE standard for getting stuff onto our Living Room TV has caused me to dig into finding HDMI solutions for as many of my systems as possible. Certainly you CAN use a Composite-to-HDMI video adapter to go from the classic Yellow/White/Red connectors to HDMI, but prepare for disappointment. By the time it gets to your 4k flat panel it's gonna be muddy and gross. These aren't upscalers. They can't clean an analog signal. More on that in a moment because there are LAYERS to these solutions.

Some are simple, and I recommend these (cheap products, but they work great) adapters:

  • Wii to HDMI Adapter - The Wii is a very under-respected console and has a TON of great games. In the US you can find a Wii at a thrift store for $20 and there's tens of millions of them out there. This simple little adapter will get you very clean 480i or 480p HDMI with audio. Combine that with the Wii's easily soft-modded operating system and you've got the potential for a multi-system emulator as well.
  • PS2 to HDMI Adapter - This little (cheap) adapter will get you HDMI output as well, although it's converted off the component Y Cb/Pb Cr/Pr signal coming out. It also needs USB Power so you may end up leaching that off the PS2 itself. One note - even though every PS2 can also play PS1 games, those games output 240p and this adapter won't pick it up, so be prepared to downgrade depending on the game. But, if you use a Progressive Scan 16:9 Widescreen game like God of War you'll be very pleased with the result.
  • Nintendo N64 - THIS is the most difficult console so far to get HDMI output from. There ARE solutions but they are few and far between and often out of stock. There's an RGB mod that will get you clean Red/Green/Blue outputs but not HDMI. You'll need to get the mod and then either do the soldering yourself or find a shop to do it for you. The holy grail is the UltraHDMI Mod but I have yet to find one and I'm not sure I want to pay $150 for it if I do.
    • The cheapest and easiest thing you can and should do with an N64 is get a Composite & S-Video to HDMI converter box. This box will also do basic up-scaling as well, but remember, this isn't going to create pixels that aren't already there.
  • Dreamcast - There is an adapter from Akura that will get you all the way to HDMI but it's $85 and it's just for Dreamcast. I chose instead to use a Dreamcast to VGA cable, as the Dreamcast can do VGA natively, then a powered VGA to HDMI box. It doesn't upscale, but rather passes the original video resolution to your panel for upscaling. In my experience this is a solid budget compromise.

If you're ever in or around Portland/Beaverton, Oregon, I highly recommend you stop by Retro Game Trader. Their selection and quality is truly unmatched. One of THE great retro game stores on the west coast of the US.

The games and systems at Retro Game Trader are amazing Retro Game Trader has shelves upon shelves of classic games

For legal retrogames on a budget, I also enjoy the new "mini consoles" you've likely heard a lot about, all of which support HDMI output natively!

  • Super NES Classic (USA or Europe have different styles) - 21 classic games, works with HDMI, includes controllers
  • NES Classic - Harder to find but they are out there. 30 classic games, plus controllers. Tiny!
  • Atari Flashback 8 - 120 games, 2 controllers AND 2 paddles!
  • C64 Mini - Includes Joystick and 64 games AND supports a USB Keyboard so you can program in C64 Basic

In the vein of retrogaming, but not directly related, I wanted to give a shout out to EVERYTHING that the 8BitDo company does. I have three of their controllers and they are amazing. They get constant firmware updates, and particularly the 8Bitdo SF30 Pro Controller is amazing as it works on Windows, Mac, Android, and Nintendo Switch. It pairs perfectly with the Switch, I use it on the road with my laptop as an "Xbox" style controller and it always Just Works. Amazing product.

If you want the inverse - the ability to use your favorite controllers with your Windows, Mac, or Raspberry Pi, check out their Wireless Adapter. You'll be able to pair all your controllers and use them on your PC - Xbox One S/X Bluetooth controller, PS4, PS3, Wii Mote, Wii U Pro wirelessly on Nintendo Switch with DS4 Motion and Rumble features! NOTE: I am NOT affiliated with 8BitDo at all, I just love their products.

We are having a ton of fun doing this. You'll always be on the lookout for old and classic games at swap meets, garage sales, and friends' houses. There's RetroGaming conventions and Arcades (like Ground Kontrol in Portland) and an ever-growing group of new friends and enthusiasts.

This post uses Amazon Referral Links and I'll use the few dollars I get from YOU using them to buy retro games for the kids! Also, go subscribe to the Hanselminutes Podcast today! We're on iTunes, Spotify, Google Play, and even Twitter! Check out the episode where Matt Phillips from Tanglewood Games uses a 1995 PC to create a NEW Sega Megadrive/Genesis game in 2018!


Sponsor: Announcing Raygun APM! Now you can monitor your entire application stack, with your whole team, all in one place. Learn more!



© 2018 Scott Hanselman. All rights reserved.
     

How to setup Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows


This commit was signed with a verified signature.

This week in obscure blog titles, I bring you the nightmare that is setting up Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows. This is one of those "it's good for you" things like diet and exercise and setting up 2 Factor Authentication. I just want to be able to sign my code commits to GitHub so I might avoid people impersonating my Git Commits (happens more than you'd think and has happened recently.) However, I was also hoping to make it more secure by using a YubiKey NEO security key. They're happy to tell you that it supports a BUNCH of stuff that you have never heard of like Yubico OTP, OATH-TOTP, OATH-HOTP, FIDO U2F, OpenPGP, Challenge-Response. I am most concerned with it acting like a Smart Card that holds a PGP (Pretty Good Privacy) key since the YubiKey can look like a "PIV (Personal Identity Verification) Smart Card."

NOTE: I am not a security expert. Let me know if something here is wrong (be nice) and I'll update it. Note also that there are a LOT of guides out there. Some are complete and encyclopedic, some include recommendations and details that are "too much," but this one was my experience. This isn't The Bible On The Topic but rather  what happened with me and what I ran into and how I got past it. Until this is Super Easy (TM) on Windows, there's gonna be guides like this.

As with all things security, there is a balance between Capital-S Secure with offline air-gapped what-nots, and Ease Of Use with tools like Keybase. It depends on your tolerance, patience, technical ability, and if you trust any online services. I like Keybase and trust them so I'm starting there with a Private Key. You can feel free to get/generate your key from wherever makes you happy and secure.

Welcome to Keybase.io

I use Windows and I like it, so if you want to use a Mac or Linux this blog post likely isn't for you. I love and support you and your choice though. ;)

Make sure you have a private PGP key that has your Git Commit Email Address associated with it

I download and installed (and optionally donated) a copy of Gpg4Win here.

Take your private key - either the one you got from Keybase or one you generated locally - and make sure that your UID (your email address that you use on GitHub) is a part of it. Here you can see mine is not, yet. That could be the main email or might be an alias or "uid" that you'll add.

Certs in Kleopatra

If not, as in my case since I'm using a key from keybase, you'll need to add a new uid to your private key. You will know you got it right when you run this command and see your email address inside it.

> gpg --list-secret-keys --keyid-format LONG


------------------------------------------------
sec# rsa4096/MAINKEY 2015-02-09 [SCEA]

uid [ultimate] keybase.io/shanselman <shanselman@keybase.io>

You can run adduid in the gpg command line or you can add it in the Kleopatra GUI.
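From the command line, that's an interactive edit-key session, roughly like this (prompts abbreviated):

> gpg --edit-key shanselman@keybase.io

gpg> adduid
Real name: Scott Hanselman
Email address: scott@hanselman.com
Comment:
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
gpg> save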

image

Once the uid is added, run the same command again and you'll see both email addresses inside it.

> gpg --list-secret-keys --keyid-format LONG


------------------------------------------------
sec# rsa4096/MAINKEY 2015-02-09 [SCEA]
uid [ultimate] keybase.io/shanselman <shanselman@keybase.io>
uid [ unknown] Scott Hanselman <scott@hanselman.com>

Then, when you make changes like this, you can export your public key and update it in Keybase.io (again, if you're using Keybase).

image

Plug in your YubiKey

When you plug your YubiKey in (assuming it's newer than 2015) it should get auto-detected and show up like this "Yubikey NEO OTP+U2F+CCID." You want it to show up as this kind of "combo" or composite device. If it's older or not in this combo mode, you may need to download the YubiKey NEO Manager and switch modes.

Setting up a YubiKey on Windows

Test that your YubiKey can be seen as a Smart Card

Go to the command line and run this to confirm that your Yubikey can be seen as a smart card by the GPG command line.

> gpg --card-status

Reader ...........: Yubico Yubikey NEO OTP U2F CCID 0
Version ..........: 2.0
....

IMPORTANT: Sometimes Windows machines and Corporate Laptops have multiple smart card readers, especially if they have Windows Hello installed like my SurfaceBook2! If you hit this, you'll want to create a text file at %appdata%\gnupg\scdaemon.conf and include a reader-port that points to your YubiKey. Mine is a NEO, yours might be a 4, etc, so be aware. You may need to reboot or at least restart/kill the GPG services/background apps for it to notice you made a change.
If you want to know what string should go in that file, go to Device Manager, then View | Show Hidden Devices and look under Software Devices. THAT is the string you want. Put this in scdaemon.conf:

reader-port "Yubico Yubikey NEO OTP+U2F+CCID 0"

Yubico Yubikey NEO OTP+U2F+CCID 0

Yubikey NEO can hold keys up to 2048 bits and the Yubikey 4 can hold up to 4096 bits - that's MOAR bits! However, you might find yourself with a 4096 bit key that is too big for the Yubikey NEO. Lots of folks believe this is a limitation of the NEO that sucks and is unacceptable. Since I'm using Keybase and starting with a 4096 bit key, one solution is to make separate 2048 bit subkeys for Authentication and Signing, etc.

From the command line, edit your keys then "addkey"

> gpg --edit-key <scott@hanselman.com>

You'll make a 2048 bit Signing key and you'll want to decide if it ever expires. If it never does, also make a revocation certificate so you can revoke it at some future point.

gpg> addkey

Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all

Save your changes, and then export the keys. You can do that with Kleopatra or with the command line:

> gpg --export-secret-keys --armor KEYID

Here's a GUI view. I have my main 4096 bit key and some 2048 bit subkeys for Signing or Encryption, etc. Make as many as you like.

image

LEVEL SET - It will be the public version of the 2048 bit Signing Key that we'll tell GitHub about and we'll put the private part on the YubiKey, acting as a Smart Card.

Move the signing subkey over to the YubiKey

Now I'm going to take my keychain here and select the signing one (note the ASTERISK after I type "key 1"), then "keytocard" to move/store it on the YubiKey's SmartCard Signature slot. I'm using my email as a way to get to my key, but if your email is used in multiple keys you'll want to use the unique Key Id/Signature. BACK UP YOUR KEYS.

> gpg --edit-key scott@hanselman.com


gpg (GnuPG) 2.2.6; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

sec rsa4096/MAINKEY
created: 2015-02-09 expires: never usage: SCEA
trust: ultimate validity: ultimate
ssb rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
created: 2015-02-09 expires: 2023-02-07 usage: S
card-no: 0006
ssb rsa2048/KEY2
created: 2015-02-09 expires: 2023-02-07 usage: E
[ultimate] (1). keybase.io/shanselman <shanselman@keybase.io>
[ultimate] (2) Scott Hanselman <scott@hanselman.com>
gpg> toggle
gpg> key 1

sec rsa4096/MAINKEY
created: 2015-02-09 expires: never usage: SCEA
trust: ultimate validity: ultimate
ssb* rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
created: 2015-02-09 expires: 2023-02-07 usage: S
card-no: 0006
ssb rsa2048/KEY2
created: 2015-02-09 expires: 2023-02-07 usage: E
[ultimate] (1). keybase.io/shanselman <shanselman@keybase.io>
[ultimate] (2) Scott Hanselman <scott@hanselman.com>

gpg> keytocard
Please select where to store the key:
(1) Signature key
(3) Authentication key
Your selection? 1

If you're storing things on your Smart Card, it should have a PIN to protect it. Also, make sure you have a backup of your primary key (if you like) because keytocard is a destructive action.

Have you set up PIN numbers for your Smart Card?

There's a PIN and an Admin PIN. The Admin PIN is the longer one. The default admin PIN is usually ‘12345678’ and the default PIN is usually ‘123456’. You'll want to set these up with either the Kleopatra GUI "Tools | Manage Smart Cards" or the gpg command line:

>gpg --card-edit

gpg/card> admin
Admin commands are allowed
gpg/card> passwd
*FOLLOW THE PROMPTS TO SET PINS, BOTH ADMIN AND STANDARD*

Tell Git about your Signing Key Globally

Be sure to tell Git on your machine some important configuration info like your signing key, but also WHERE the gpg.exe is. This is important because git ships its own older local copy of gpg.exe and you installed a newer one!

git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"

git config --global commit.gpgsign true
git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY

If you don't want to set ALL commits to signed, you can skip the commit.gpgsign=true and just include -S as you commit your code:

git commit -S -m your commit message

Test that you can sign things

If you are running Kleopatra (the noob Windows GUI) when you run gpg --card-status you'll notice the cert will turn boldface and get marked as certified.

The goal here is for you to make sure GPG for Windows knows that there's a private key on the smart card, and associates a signing Key ID with that private key so when Git wants to sign a commit, you'll get a Smart Card PIN Prompt.

Advanced: You can make SubKeys for individual things so that they might also be later revoked without torching your main private key. Using the Kleopatra tool from GPG for Windows you can explore the keys and get their IDs. You'll use those Subkey IDs in your git config to refer to your signingkey.

At this point things should look kinda like this in the Kleopatra GUI:

Multiple PGP Sub keys

Make sure to prove you can sign something by making a text file and signing it. If you get a Smart Card prompt (assuming a YubiKey) and a larger .gpg file appears, you're cool.

> gpg --sign .\quicktest.txt

> dir quic*

Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 4/18/2018 3:29 PM 9 quicktest.txt
-a---- 4/18/2018 3:38 PM 360 quicktest.txt.gpg

Now, go up into GitHub to https://github.com/settings/keys at the bottom. Remember that's GPG Keys, not SSH Keys. Make a new one and paste in your public signing key or subkey.

Note the KeyID (or the SubKey ID) and remember that one of them (either the signing one or the primary one) should be the ID you used when you set up user.signingkey in git above.

GPG Keys in GitHub

The most important thing is that:

  • the email address associated with the GPG Key
  • is the same as the email address GitHub has verified for you
  • is the same as the email in the Git Commit
    • git config --global user.email "email@example.com"

If not, double check your email addresses and make sure they are the same everywhere.

Try a signed commit

If pressing enter pops a PIN Dialog then you're getting somewhere!

Please unlock the card

Commit and push and go over to GitHub and see if your commit is Verified or Unverified. Unverified means that the commit was signed but either had an email GitHub had never seen OR that you forgot to tell GitHub about your signing public key.

Signed Verified Git Commits

Yay!

Setting up to a second (or third) machine

Once you've told Git about your signing key and you've got your signing key stored in your YubiKey, you'll likely want to set up on another machine.

  • Install GPG for Windows
    • gpg --card-status
    • Import your public key. If I'm setting up signing on another machine, I can import my PUBLIC certificates like this or graphically in Kleopatra.
      >gpg --import "keybase public key.asc"
      
      gpg: key *KEYID*: "keybase.io/shanselman <shanselman@keybase.io>" not changed
      gpg: Total number processed: 1
      gpg: unchanged: 1

      You may also want to run gpg --expert --edit-key *KEYID* and type "trust" to certify your key as someone (yourself) that you trust.

  • Install Git (I assume you did this) and configure GPG
    • git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"
    • git config --global commit.gpgsign true
    • git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY
  • Sign something with "gpg --sign" to test
  • Do a test commit.

Finally, feel superior for 8 minutes, then realize you're really just lucky because you just followed the blog post of someone who ALSO has no clue, then go help a co-worker because this is TOO HARD.


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

HttpClientFactory for typed HttpClient instances in ASP.NET Core 2.1


I'm continuing to upgrade my podcast site https://www.hanselminutes.com to .NET Core 2.1 running ASP.NET Core 2.1. I'm using Razor Pages having converted my old WebMatrix site (like 8 years old) and it's gone very smoothly. I've got a ton of blog posts queued up as I'm learning a ton. I've added Unit Testing for the Razor Pages as well as more complete Integration Testing for checking things "from the outside" like URL redirects.

My podcast has recently switched away from a custom database over to using SimpleCast and their REST API for the back end. There's a number of ways to abstract that API away as well as the HttpClient that will ultimately make the call to the SimpleCast backend. I am a fan of the Refit library for typed REST Clients and there are ways to integrate these two things but for now I'm going to use the new HttpClientFactory introduced in ASP.NET Core 2.1 by itself.

Next I'll look at implementing a Polly Handler for resilience policies to be used like Retry, WaitAndRetry, and CircuitBreaker, etc. (I blogged about Polly in 2015 - you should check it out) as it's just way too useful to not use.

HttpClient Factory lets you preconfigure named HttpClients with base addresses and default headers so you can just ask for them later by name.

public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient("SomeCustomAPI", client =>
    {
        client.BaseAddress = new Uri("https://someapiurl/");
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        client.DefaultRequestHeaders.Add("User-Agent", "MyCustomUserAgent");
    });
    services.AddMvc();
}

Then later you ask for it and you've got less to worry about.

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

namespace MyApp.Controllers
{
    public class HomeController : Controller
    {
        private readonly IHttpClientFactory _httpClientFactory;

        public HomeController(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        public async Task<IActionResult> Index()
        {
            var client = _httpClientFactory.CreateClient("SomeCustomAPI");
            return Ok(await client.GetStringAsync("/api"));
        }
    }
}

I prefer a TypedClient and I just add it by type in Startup.cs...just like above except:

services.AddHttpClient<SimpleCastClient>();

Note that I could put the BaseAddress in multiple places depending on if I'm calling my own API, a 3rd party, or some dev/test/staging version. I could also pull it from config:

services.AddHttpClient<SimpleCastClient>(client => client.BaseAddress = new Uri(Configuration["SimpleCastServiceUri"]));

Again, I'll look at ways to make this even simpler AND more robust (it has no retries, etc) with Polly soon.

public class SimpleCastClient
{
    private HttpClient _client;
    private ILogger<SimpleCastClient> _logger;
    private readonly string _apiKey;

    public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
    {
        _client = client;
        _client.BaseAddress = new Uri($"https://api.simplecast.com"); //Could also be set in Startup.cs
        _logger = logger;
        _apiKey = config["SimpleCastAPIKey"];
    }

    public async Task<List<Show>> GetShows()
    {
        try
        {
            var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
            _logger.LogWarning($"HttpClient: Loading {episodesUrl}");
            var res = await _client.GetAsync(episodesUrl);
            res.EnsureSuccessStatusCode();
            return await res.Content.ReadAsAsync<List<Show>>();
        }
        catch (HttpRequestException ex)
        {
            _logger.LogError($"An error occurred connecting to SimpleCast API {ex.ToString()}");
            throw;
        }
    }
}

Once I have the client I can use it from another layer, or just inject it with [FromServices] whenever I have a method that needs one:

public class IndexModel : PageModel
{
    public async Task OnGetAsync([FromServices]SimpleCastClient client)
    {
        var shows = await client.GetShows();
    }
}

Or in the constructor:

public class IndexModel : PageModel
{
    private SimpleCastClient _client;

    public IndexModel(SimpleCastClient client)
    {
        _client = client;
    }

    public async Task OnGetAsync()
    {
        var shows = await _client.GetShows();
    }
}

Another nice side effect is that HttpClients that are created from the HttpClientFactory give me free logging:

info: System.Net.Http.ShowsClient.LogicalHandler[100]

Start processing HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
System.Net.Http.ShowsClient.LogicalHandler:Information: Start processing HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
info: System.Net.Http.ShowsClient.ClientHandler[100]
Sending HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
System.Net.Http.ShowsClient.ClientHandler:Information: Sending HTTP request GET https://api.simplecast.com/v1/podcasts/shownum/episodes.json?api_key=
info: System.Net.Http.ShowsClient.ClientHandler[101]
Received HTTP response after 882.8487ms - OK
System.Net.Http.ShowsClient.ClientHandler:Information: Received HTTP response after 882.8487ms - OK
info: System.Net.Http.ShowsClient.LogicalHandler[101]
End processing HTTP request after 895.3685ms - OK
System.Net.Http.ShowsClient.LogicalHandler:Information: End processing HTTP request after 895.3685ms - OK

It was super easy to move my existing code over to this model, and I'll keep simplifying AND adding other features as I learn more.


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Adding Resilience and Transient Fault handling to your .NET Core HttpClient with Polly


Last week, while upgrading my podcast site to ASP.NET Core 2.1 and .NET Core 2.1, I moved my HttpClient instances over to be created by the new HttpClientFactory. Now I have a single central place where my HttpClient objects are created and managed, and I can set policies as I like on each named client.

It really can't be overstated how useful a resilience framework for .NET Core like Polly is.

Take some code like this that calls a backend REST API:

public class SimpleCastClient
{
    private HttpClient _client;
    private ILogger<SimpleCastClient> _logger;
    private readonly string _apiKey;

    public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
    {
        _client = client;
        _client.BaseAddress = new Uri($"https://api.simplecast.com");
        _logger = logger;
        _apiKey = config["SimpleCastAPIKey"];
    }

    public async Task<List<Show>> GetShows()
    {
        var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
        var res = await _client.GetAsync(episodesUrl);
        return await res.Content.ReadAsAsync<List<Show>>();
    }
}

Now consider what it takes to add things like

  • Retry n times - maybe it's a network blip
  • Circuit-breaker - Try a few times but stop so you don't overload the system.
  • Timeout - Try, but give up after n seconds/minutes
  • Cache - You asked before!
    • I'm going to do a separate blog post on this because I wrote a WHOLE caching system and I may be able to "refactor via subtraction."

If I want features like Retry and Timeout, I could end up littering my code with retry loops and timeout handling. OR, I could put it in a base class and build a series of HttpClient utilities. However, I don't think I should have to do those things because while they are behaviors, they are really cross-cutting policies. I'd like a central way to manage HttpClient policy!

Enter Polly. Polly is an OSS library with a lovely Microsoft.Extensions.Http.Polly package that you can use to combine the goodness of Polly with ASP.NET Core 2.1.

As Dylan from the Polly Project says:

HttpClientFactory in ASPNET Core 2.1 provides a way to pre-configure instances of HttpClient which apply Polly policies to every outgoing call.

I just went into my Startup.cs and changed this

services.AddHttpClient<SimpleCastClient>();

to this (after adding "using Polly;" as a namespace)

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.RetryAsync(2));

and now I've got Retries. Change it to this:

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 2,
        durationOfBreak: TimeSpan.FromMinutes(1)
    ));

And now I've got CircuitBreaker where it backs off for a minute if it's broken (hit a handled fault) twice!

I like AddTransientHttpErrorPolicy because it automatically handles HTTP 5xx and HTTP 408 responses as well as the occasional System.Net.Http.HttpRequestException. I can have as many named or typed HttpClients as I like and they can have all kinds of specific policies with VERY sophisticated behaviors. If those behaviors aren't actual Business Logic (tm) then why not get them out of your code?
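To make that concrete, here's a hedged sketch (mine, not lifted from my Startup.cs) of chaining a couple of policies on one typed client. With "using Polly;" in scope, the policy added first wraps the ones added later, so the retry here is outermost; the retry counts and delays are made-up numbers for illustration.

// A sketch, not gospel: retry (outermost) wrapping a circuit breaker on one typed client.
services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(p => p.WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))))
    .AddTransientHttpErrorPolicy(p => p.CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 5,
        durationOfBreak: TimeSpan.FromSeconds(30)));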

Go read up on Polly at https://github.com/App-vNext/Polly and check out the extensive samples at https://github.com/App-vNext/Polly-Samples/tree/master/PollyTestClient/Samples.

Even though it works great with ASP.NET Core 2.1 (best, IMHO) you can use Polly with .NET 4, .NET 4.5, or anything that's compliant with .NET Standard 1.1.

Gotchas

A few things to remember. If you are POSTing to an endpoint and applying retries, you want that operation to be idempotent.

"From a RESTful service standpoint, for an operation (or service call) to be idempotent, clients can make that same call repeatedly while producing the same result."

But everyone's API is different. What would happen if you applied a Polly Retry Policy to an HttpClient and it POSTed twice? Is that backend behavior compatible with your policies? Know what behavior you expect and plan for it. You may want to have a GET policy and a POST one and use different HttpClients. Just be conscious.
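Here's one hedged way that could look, using two named clients. The client names here are mine, purely for illustration, not anything the factory requires.

// Reads get retries; writes get no retry policy at all, so a POST can never fire twice.
services.AddHttpClient("SimpleCastReads", c => c.BaseAddress = new Uri("https://api.simplecast.com"))
    .AddTransientHttpErrorPolicy(p => p.RetryAsync(3));

services.AddHttpClient("SimpleCastWrites", c => c.BaseAddress = new Uri("https://api.simplecast.com"));
// Then _httpClientFactory.CreateClient("SimpleCastReads") vs CreateClient("SimpleCastWrites") at the call site.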

Next, think about Timeouts. HttpClients have a Timeout which is the "all tries overall" timeout, while a TimeoutPolicy inside a Retry is a "timeout per try." Again, be aware.
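As a hedged sketch of the difference (my numbers, not a recommendation): set the overall timeout on the HttpClient itself, and add a per-try Polly TimeoutPolicy after the retry policy so it sits inside each try.

// HttpClient.Timeout covers all tries together; the TimeoutPolicy added after the
// retry policy sits inside it, so it applies to each individual try.
services.AddHttpClient<SimpleCastClient>(c => c.Timeout = TimeSpan.FromSeconds(60)) // overall
    .AddTransientHttpErrorPolicy(p => p.RetryAsync(2))
    .AddPolicyHandler(Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10))); // per try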

Thanks to Dylan Reisenberger for his help on this post, along with Joel Hulen! Also read more about HttpClientFactory on Steve Gordon's blog and learn more about HttpClientFactory and Polly on the Polly project site.


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Adding Cross-Cutting Memory Caching to an HttpClientFactory in ASP.NET Core with Polly


A couple of days ago I added Resilience and Transient Fault handling to your .NET Core HttpClient with Polly. Polly provides a way to pre-configure instances of HttpClient which apply Polly policies to every outgoing call. That means I can say things like "Call this API and try 3 times if there's any issues before you panic," as an example. It lets me move the cross-cutting concerns - the policies - out of the business part of the code and over to a central location or module, or even configuration if I wanted.

I've been upgrading my podcast of late at https://www.hanselminutes.com to ASP.NET Core 2.1 and Razor Pages with SimpleCast for the back end. Since I've removed SQL Server as my previous data store and I'm now sitting entirely on top of a third party API, I want to think about how often I call this API. As a rule, there's no reason to call it any more often than a change might occur.

I publish a new show every Thursday at 5pm PST, so I suppose I could cache the feed for 7 days, but sometimes I make description changes, add links, update titles, etc. The show gets many tens of thousands of hits per episode and I definitely don't want to abuse the SimpleCast API, so I decided that caching for 4 hours seemed reasonable.

I went and wrote a bunch of caching code on my own. This is fine and it works and has been in production for a few months without any issues.

A few random notes:

  • Stuff is being passed into the Constructor by the IoC system built into ASP.NET Core
    • That means the HttpClient, Logger, and MemoryCache are handed to this little abstraction. I don't new them up myself
  • All my "Show Database" is, is a GetShows()
    • That means I have TestDatabase that implements IShowDatabase that I use for some Unit Tests. And I could have multiple implementations if I liked.
  • Caching here is interesting.
    • Sure I could do the caching in just a line or two, but a caching double check is more needed than one often realizes.
    • I check the cache, and if I hit it, I am done and I bail. Yay!
    • If not, let's wait on a SemaphoreSlim. This is a great, simple way to manage waiting on a limited resource. I don't want to accidentally have two threads call out to the SimpleCast API if I'm literally in the middle of doing it already.
      • "The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short."
    • So I check again inside that block to see if it showed up in the cache in the space between there and the previous check. Doesn't hurt to be paranoid.
    • Got it? Cool. Store it away and release the semaphore in the finally of the try.

Don't copy paste this. My GOAL is to NOT have to do any of this, even though it's valid.

public class ShowDatabase : IShowDatabase
{
    private readonly IMemoryCache _cache;
    private readonly ILogger _logger;
    private SimpleCastClient _client;

    public ShowDatabase(IMemoryCache memoryCache,
            ILogger<ShowDatabase> logger,
            SimpleCastClient client)
    {
        _client = client;
        _logger = logger;
        _cache = memoryCache;
    }

    static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1);

    static HttpClient client = new HttpClient(); //Note: not used below - the injected _client does the work

    public async Task<List<Show>> GetShows()
    {
        Func<Show, bool> whereClause = c => c.PublishedAt < DateTime.UtcNow;

        var cacheKey = "showsList";
        List<Show> shows = null;

        //CHECK and BAIL - optimistic
        if (_cache.TryGetValue(cacheKey, out shows))
        {
            _logger.LogDebug($"Cache HIT: Found {cacheKey}");
            return shows.Where(whereClause).ToList();
        }

        await semaphoreSlim.WaitAsync();
        try
        {
            //RARE BUT NEEDED DOUBLE PARANOID CHECK - pessimistic
            if (_cache.TryGetValue(cacheKey, out shows))
            {
                _logger.LogDebug($"Amazing Speed Cache HIT: Found {cacheKey}");
                return shows.Where(whereClause).ToList();
            }

            _logger.LogWarning($"Cache MISS: Loading new shows");
            shows = await _client.GetShows();
            _logger.LogWarning($"Cache MISS: Loaded {shows.Count} shows");
            _logger.LogWarning($"Cache MISS: Loaded {shows.Where(whereClause).ToList().Count} PUBLISHED shows");

            var cacheExpirationOptions = new MemoryCacheEntryOptions();
            cacheExpirationOptions.AbsoluteExpiration = DateTime.Now.AddHours(4);
            cacheExpirationOptions.Priority = CacheItemPriority.Normal;

            _cache.Set(cacheKey, shows, cacheExpirationOptions);
            return shows.Where(whereClause).ToList();
        }
        catch (Exception e)
        {
            _logger.LogCritical("Error getting episodes!");
            _logger.LogCritical(e.ToString());
            _logger.LogCritical(e?.InnerException?.ToString());
            throw;
        }
        finally
        {
            semaphoreSlim.Release();
        }
    }
}

public interface IShowDatabase
{
    Task<List<Show>> GetShows();
}

Again, this is great and it works fine. But the BUSINESS is in _client.GetShows() and the rest is all CEREMONY. Can this be broken up? Sure, I could put stuff in a base class, or make an extension method and bury it in there, or use Akavache, or make a GetOrFetch and start passing around Funcs of "do this but check here first":

IObservable<T> GetOrFetchObject<T>(string key, Func<Task<T>> fetchFunc, DateTimeOffset? absoluteExpiration = null);
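A rough Task-based sketch of that same idea on top of IMemoryCache might look like the following. This is not the post's code, GetOrFetchAsync is a name I made up, and note it does NOT stop two threads from both calling fetchFunc - that's exactly what the SemaphoreSlim in the class above is guarding against.

// Minimal "check the cache, else fetch and store" helper (requires Microsoft.Extensions.Caching.Memory).
public static class MemoryCacheExtensions
{
    public static async Task<T> GetOrFetchAsync<T>(this IMemoryCache cache, string key,
        Func<Task<T>> fetchFunc, DateTimeOffset absoluteExpiration)
    {
        if (cache.TryGetValue(key, out T value))
            return value;

        value = await fetchFunc();
        cache.Set(key, value, absoluteExpiration);
        return value;
    }
}

Usage would be something like var shows = await _cache.GetOrFetchAsync("showsList", () => _client.GetShows(), DateTimeOffset.Now.AddHours(4));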

Could I use Polly and refactor via subtraction?

Per the Polly docs:

The Polly CachePolicy is an implementation of read-through cache, also known as the cache-aside pattern. Providing results from cache where possible reduces overall call duration and can reduce network traffic.

First, I'll remove all my own caching code and just make the call on every page view. Yes, I could write the LINQ a few ways. Yes, this could all be one line. Yes, I like Logging.

public async Task<List<Show>> GetShows()
{
    _logger.LogInformation($"Loading new shows");
    List<Show> shows = await _client.GetShows();
    _logger.LogInformation($"Loaded {shows.Count} shows");
    return shows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
}

No caching, I'm doing The Least.

Polly supports both the .NET MemoryCache that is per process/per node, and also .NET Core's IDistributedCache for having one cache that lives somewhere shared like Redis or SQL Server. Since my podcast is just one node, one web server, and it's low-CPU, I'm not super worried about it. If Azure WebSites does end up auto-scaling it, sure, this cache strategy will happen n times. I'll switch to Distributed if that becomes a problem.
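If that day comes, the swap is mostly a registration change. A minimal sketch (the in-memory IDistributedCache stand-in shown here is just for illustration, not what I'm running):

// Hedged sketch: if the site ever scales out, the per-process cache would give way to
// a shared IDistributedCache registration (e.g. Redis-backed) instead of AddMemoryCache().
services.AddDistributedMemoryCache(); // in-memory stand-in that implements IDistributedCache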

I'll add a reference to Polly.Caching.MemoryCache in my project.

I ensure I have the .NET Memory Cache in my list of services in ConfigureServices in Startup.cs:

services.AddMemoryCache();

STUCK...for now!

AND...here is where I'm stuck. I got this far into the process and now I'm either confused OR I'm in a Chicken and the Egg Situation.

Forgive me, friends, and Polly authors, as this Blog Post will temporarily turn into a GitHub Issue. Once I work through it, I'll update this so others can benefit. And I still love you; no disrespect intended.

The Polly.Caching.MemoryCache stuff is several months old, and existed (and worked) well before the new HttpClientFactory stuff I blogged about earlier.

I would LIKE to add my Polly Caching Policy chained after my Transient Error Policy:

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 2,
        durationOfBreak: TimeSpan.FromMinutes(1)
    ))
    .AddPolicyHandlerFromRegistry("myCachingPolicy"); //WHAT I WANT?

However, that policy hasn't been added to the Policy Registry yet. It doesn't exist! This makes me feel like some of the work that is happening in ConfigureServices() is a little premature. ConfigureServices() is READY, AIM and Configure() is FIRE/DO-IT, in my mind.

If I set up a Memory Cache in Configure, I need to use the Dependency System to get the stuff I want, like the .NET Core IMemoryCache that I put in services just now.

public void Configure(IApplicationBuilder app, IPolicyRegistry<string> policyRegistry, IMemoryCache memoryCache)
{
    MemoryCacheProvider memoryCacheProvider = new MemoryCacheProvider(memoryCache);
    var cachePolicy = Policy.CacheAsync(memoryCacheProvider, TimeSpan.FromMinutes(5));
    policyRegistry.Add("cachePolicy", cachePolicy);
    ...

But at this point, it's too late! I need this policy up earlier...or I need to figure out a way to tell the HttpClientFactory to use this policy...but I've been using extension methods in ConfigureServices to do that so far. Perhaps some extension methods are needed like AddCachingPolicy(). However, from what I can see:

  • This code can't work with the ASP.NET Core 2.1's HttpClientFactory pattern...yet. https://github.com/App-vNext/Polly.Caching.MemoryCache
  • I could manually new things up, but I'm already deep into Dependency Injection...I don't want to start newing things and get into scoping issues.
  • There appear to be changes between  v5.4.0 and 5.8.0. Still looking at this.
  • Bringing in the Microsoft.Extensions.Http.Polly package brings in Polly-Signed 5.8.0...

I'm likely bumping into a point in time thing. I will head to bed and return to this post in a day and see if I (or you, Dear Reader) have solved the problem in my sleep.

"code making and code breaking" by earthlightbooks - Licensed under CC-BY 2.0 - Original source via Flickr


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

The Programmer's Hindsight - Caching with HttpClientFactory and Polly Part 2


Hindsight - by Nate Steiner - Public Domain

In my last blog post Adding Cross-Cutting Memory Caching to an HttpClientFactory in ASP.NET Core with Polly I actually failed to complete my mission. I talked to a few people (thanks Dylan and Damian and friends) and I think my initial goal may have been wrong.

I thought I wanted "magic add this policy and get free caching" for HttpClients that come out of the new .NET Core 2.1 HttpClientFactory, but first, nothing is free, and second, everything (in computer science) is layered. Am I caching the right thing at the right layer?

The good thing that comes out of explorations and discussions like this is Better Software. Given that I'm running Previews/Daily Builds of both .NET Core 2.1 (in preview as of the time of this writing) and Polly (always under active development) I realize I'm on some kind of cutting edge. The bad news (and it's not really bad) is that everything I want to do is possible, it's just not always easy. For example, a lot of "hooking up" happens when one makes a C# Extension Method and adds it into the ASP.NET Middleware Pipeline with "services.AddSomeStuffThatIsTediousButUseful()."

Polly and ASP.NET Core are insanely configurable, but I'm personally interested in the 80% or even the 90% case. The 10% will definitely require you/me to learn more about the internals of the system, while the 90% will ideally be abstracted away from the average developer (me).

I've had a Skype with Dylan from Polly and he's been updating the excellent Polly docs as we walk around how caching should work in an HttpClientFactory world. Fantastic stuff, go read it. I'll steal some here:

ASPNET Core 2.1 - What is HttpClient factory?

From ASPNET Core 2.1, Polly integrates with IHttpClientFactory. HttpClient factory is a factory that simplifies the management and usage of HttpClient in four ways. It:

  • allows you to name and configure logical HttpClients. For instance, you may configure a client that is pre-configured to access the github API;

  • manages the lifetime of HttpClientMessageHandlers to avoid some of the pitfalls associated with managing HttpClient yourself (the dont-dispose-it-too-often but also dont-use-only-a-singleton aspects);

  • provides configurable logging (via ILogger) for all requests and responses performed by clients created with the factory;

  • provides a simple API for adding middleware to outgoing calls, be that for logging, authorisation, service discovery, or resilience with Polly.

The Microsoft early announcement speaks more to these topics, and Steve Gordon's pair of blog posts (1; 2) are also an excellent read for deeper background and some great worked examples.

Polly and Polly policies work great with ASP.NET Core 2.1 and integrate nicely. I'm sure it will integrate even more conveniently with a few smart Extension Methods to abstract away the hard parts so we can fall into the "pit of success."

Caching with Polly and HttpClient

Here's where it gets interesting. To me. Or, you, I suppose, Dear Reader, if you made it this far into a blog post (and sentence) with too many commas.

This is a salient and important point:

Polly is generic (not tied to Http requests)

Now, this is where I got in trouble:

Caching with Polly CachePolicy in a DelegatingHandler caches at the HttpResponseMessage level

I ended up caching an HttpResponseMessage...but it has a "stream" inside it at HttpResponseMessage.Content. It's meant to be read once. Not cached. I wasn't caching a string, or some JSON, or some deserialized JSON objects, I ended up caching what's (effectively) an ephemeral one-time object and then de-serializing it every time. I mean, it's cached, but why am I paying the deserialization cost on every Page View?

The Programmer's Hindsight: This is such a classic programming/coding experience. Yesterday this was opaque and confusing. I didn't understand what was happening or why it was happening. Today - with The Programmer's Hindsight - I know exactly where I went wrong and why. Like, how did I ever think this was gonna work? ;)

As Dylan from Polly so wisely points out:

It may be more appropriate to cache at a level higher-up. For example, cache the results of stream-reading and deserializing to the local type your app uses. Which, ironically, I was already doing in my original code. It just felt heavy. Too much caching and too little business. I am trying to refactor it away and make it more generic!

This is my "ShowDatabase" (just a JSON file) that wraps my httpClient

public class ShowDatabase : IShowDatabase
{
    private readonly IMemoryCache _cache;
    private readonly ILogger _logger;
    private SimpleCastClient _client;
 
    public ShowDatabase(IMemoryCache memoryCache,
            ILogger<ShowDatabase> logger,
            SimpleCastClient client)
    {
        _client = client;
        _logger = logger;
        _cache = memoryCache;
    }
 
    static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1);
  
    public async Task<List<Show>> GetShows()
    {
        Func<Show, bool> whereClause = c => c.PublishedAt < DateTime.UtcNow;
 
        var cacheKey = "showsList";
        List<Show> shows = null;
 
        //CHECK and BAIL - optimistic
        if (_cache.TryGetValue(cacheKey, out shows))
        {
            _logger.LogDebug($"Cache HIT: Found {cacheKey}");
            return shows.Where(whereClause).ToList();
        }
 
        await semaphoreSlim.WaitAsync();
        try
        {
            //RARE BUT NEEDED DOUBLE PARANOID CHECK - pessimistic
            if (_cache.TryGetValue(cacheKey, out shows))
            {
                _logger.LogDebug($"Amazing Speed Cache HIT: Found {cacheKey}");
                return shows.Where(whereClause).ToList();
            }
 
            _logger.LogWarning($"Cache MISS: Loading new shows");
            shows = await _client.GetShows();
            _logger.LogWarning($"Cache MISS: Loaded {shows.Count} shows");
            _logger.LogWarning($"Cache MISS: Loaded {shows.Where(whereClause).ToList().Count} PUBLISHED shows");
 
            var cacheExpirationOptions = new MemoryCacheEntryOptions();
            cacheExpirationOptions.AbsoluteExpiration = DateTime.Now.AddHours(4);
            cacheExpirationOptions.Priority = CacheItemPriority.Normal;
 
            _cache.Set(cacheKey, shows, cacheExpirationOptions);
            return shows.Where(whereClause).ToList();
        }
        catch (Exception e)
        {
            _logger.LogCritical("Error getting episodes!");
            _logger.LogCritical(e.ToString());
            _logger.LogCritical(e?.InnerException?.ToString());
            throw;
        }
        finally
        {
            semaphoreSlim.Release();
        }
    }
}
 
public interface IShowDatabase
{
    Task<List<Show>> GetShows();
}

I'll move a bunch of this into some generic helpers for myself, or I'll use Akavache, or I'll try another Polly Cache Policy implemented farther up the call stack! Thanks for reading my ramblings!


Sponsor: SparkPost’s cloud email APIs and C# library make it easy for you to add email messaging to your .NET applications and help ensure your messages reach your user’s inbox on time. Get a free developer account and a head start on your integration today!



© 2018 Scott Hanselman. All rights reserved.
     

Eyes wide open - Correct Caching is always hard


Image from Pixabay used under Creative Commons

In my last post I talked about Caching and some of the stuff I've been doing to cache the results of a VERY expensive call to the backend that hosts my podcast.

As always, the comments are better than the post! Thanks to you, Dear Reader.

The code is below. Note that the MemoryCache is a singleton, but within the process. It is not (yet) a DistributedCache. Also note that Caching is Complex(tm) and that thousands of pages have been written about caching by smart people. This is a blog post as part of a series, so use your head and do your research. Don't take anyone's word for it.

Bill Kempf had an excellent comment on that post. Thanks Bill! He said:

The SemaphoreSlim is a bad idea. This "mutex" has visibility different from the state it's trying to protect. You may get away with it here if this is the only code that accesses that particular key in the cache, but work or not, it's a bad practice.
As suggested, GetOrCreate (or more appropriate for this use case, GetOrCreateAsync) should handle the synchronization for you.

My first reaction was, "bad idea?! Nonsense!" It took me a minute to parse his words and absorb. Ok, it took a few hours of background processing plus I had lunch.

Again, here's the code in question. I've removed logging for brevity. I'm also deeply not interested in your emotional investment in my brackets/braces style. It changes with my mood. ;)

public class ShowDatabase : IShowDatabase
{
    private readonly IMemoryCache _cache;
    private readonly ILogger _logger;
    private SimpleCastClient _client;

    public ShowDatabase(IMemoryCache memoryCache,
        ILogger<ShowDatabase> logger,
        SimpleCastClient client){
        _client = client;
        _logger = logger;
        _cache = memoryCache;
    }

    static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1);

    public async Task<List<Show>> GetShows() {
        Func<Show, bool> whereClause = c => c.PublishedAt < DateTime.UtcNow;

        var cacheKey = "showsList";
        List<Show> shows = null;

        //CHECK and BAIL - optimistic
        if (_cache.TryGetValue(cacheKey, out shows))
        {
            return shows.Where(whereClause).ToList();
        }

        await semaphoreSlim.WaitAsync();
        try
        {
            //RARE BUT NEEDED DOUBLE PARANOID CHECK - pessimistic
            if (_cache.TryGetValue(cacheKey, out shows))
            {
                return shows.Where(whereClause).ToList();
            }

            shows = await _client.GetShows();

            var cacheExpirationOptions = new MemoryCacheEntryOptions();
            cacheExpirationOptions.AbsoluteExpiration = DateTime.Now.AddHours(4);
            cacheExpirationOptions.Priority = CacheItemPriority.Normal;

            _cache.Set(cacheKey, shows, cacheExpirationOptions);
            return shows.Where(whereClause).ToList();
        }
        catch (Exception e) {
            throw;
        }
        finally {
            semaphoreSlim.Release();
        }
    }
}

public interface IShowDatabase {
    Task<List<Show>> GetShows();
}

SemaphoreSlim IS very useful. From the docs:

The System.Threading.Semaphore class represents a named (systemwide) or local semaphore. It is a thin wrapper around the Win32 semaphore object. Win32 semaphores are counting semaphores, which can be used to control access to a pool of resources.

The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short. SemaphoreSlim relies as much as possible on synchronization primitives provided by the common language runtime (CLR). However, it also provides lazily initialized, kernel-based wait handles as necessary to support waiting on multiple semaphores. SemaphoreSlim also supports the use of cancellation tokens, but it does not support named semaphores or the use of a wait handle for synchronization.

And my use of a Semaphore here is correct...for some definitions of the word "correct." ;) Back to Bill's wise words:

You may get away with it here if this is the only code that accesses that particular key in the cache, but work or not, it's a bad practice.

Ah! In this case, my cacheKey is "showsList" and I'm "protecting" it with a lock and double-check. That lock/check is fine and appropriate HOWEVER I have no guarantee (other than I wrote the whole app) that some other code isn't also accessing the same IMemoryCache (remember, process-scoped singleton) at the same time. It's protected only within this function!

Here's where it gets even more interesting.

  • I could make my own IMemoryCache, wrap things up, and then protect inside with my own TryGetValues...but then I'm back to checking/doublechecking etc.
  • However, while I could lock/protect on a key...what about the semantics of other cached values that may depend on my key. There are none, but you could see a world where there are.

Yes, we are getting close to making our own implementation of Redis here, but bear with me. You have to know when to stop and say it's correct enough for this site or project BUT as Bill and the commenters point out, you also have to be Eyes Wide Open about the limitations and gotchas so they don't bite you as your app expands!

The suggestion was made to use the GetOrCreateAsync() extension method for MemoryCache. Bill and other commenters said:

As suggested, GetOrCreate (or more appropriate for this use case, GetOrCreateAsync) should handle the synchronization for you.

Sadly, it doesn't work that way. There's no guarantee (via locking like I was doing) that the factory method (the thing that populates the cache) won't get called twice. That is, someone could TryGetValue, get nothing, and continue on, while another thread is already in line to call the factory again.

public static async Task<TItem> GetOrCreateAsync<TItem>(this IMemoryCache cache, object key, Func<ICacheEntry, Task<TItem>> factory)
{
    if (!cache.TryGetValue(key, out object result))
    {
        var entry = cache.CreateEntry(key);
        result = await factory(entry);
        entry.SetValue(result);
        // need to manually call dispose instead of having a using
        // in case the factory passed in throws, in which case we
        // do not want to add the entry to the cache
        entry.Dispose();
    }

    return (TItem)result;
}

Is this the end of the world? Not at all. Again, what is your project's definition of correct? Computer science correct? Guaranteed to always work correct? Spec correct? Mostly works and doesn't crash all the time correct?

Do I want to:

  • Actively and aggressively avoid making my expensive backend call at the risk of in fact having another part of the app make that call anyway?
    • What I am doing with my cacheKey is clearly not a "best practice" although it works today.
  • Accept that my backend call could happen twice in short succession and the last caller's thread would ultimately populate the cache.
    • My code would become a dozen lines simpler, have no process-wide locking, but also work adequately. However, it would be naïve caching at best. Even ConcurrentDictionary has no guarantees - "it is always possible for one thread to retrieve a value, and another thread to immediately update the collection by giving the same key a new value." A minimal sketch of that simpler option follows after this list.
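Here's that sketch, accepting the occasional duplicate backend call and letting MemoryCache's GetOrCreateAsync (shown above) do the rest. The names match the code earlier in this post; the 4-hour expiration is the same arbitrary number I've been using.

public async Task<List<Show>> GetShows()
{
    var shows = await _cache.GetOrCreateAsync("showsList", entry =>
    {
        entry.AbsoluteExpiration = DateTimeOffset.Now.AddHours(4);
        return _client.GetShows(); // may run twice under a race; last writer wins
    });
    return shows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
}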

What a fun discussion. What are your thoughts?


Sponsor: SparkPost’s cloud email APIs and C# library make it easy for you to add email messaging to your .NET applications and help ensure your messages reach your user’s inbox on time. Get a free developer account and a head start on your integration today!



© 2018 Scott Hanselman. All rights reserved.
     

Announcing .NET Core 2.1 RC 1 Go Live AND .NET Core 3.0 Futures


I just got back from the Microsoft BUILD Conference where Scott Hunter and I announced both .NET Core 2.1 RC1 AND talked about .NET Core 3.0 in the future.

.NET Core 2.1 RC1

First, .NET Core 2.1's Release Candidate is out. This one has a Go Live license and it's very close to release.

You can download and get started with .NET Core 2.1 RC 1, on Windows, macOS, and Linux:

You can see complete details of the release in the .NET Core 2.1 RC 1 release notes. Related instructions, known issues, and workarounds are included in the release notes. Please report any issues you find in the comments or at dotnet/core #1506. ASP.NET Core 2.1 RC 1 and Entity Framework 2.1 RC 1 are also releasing today. You can develop .NET Core 2.1 apps with Visual Studio 2017 15.7, Visual Studio for Mac 7.5, or Visual Studio Code.

Here's a deep dive on the performance benefits, which are SIGNIFICANT. It's also worth noting that you can get 2x+ speed improvements for your builds/compiles by adopting the .NET Core 2.1 RC SDK while continuing to target earlier .NET Core releases, like 2.0 for the Runtime.

  • Go Live - You can put this version in production and get support.
  • Alpine Support - There are docker images at 2.1-sdk-alpine and 2.1-runtime-alpine.
  • ARM Support - We can compile on Raspberry Pi now! .NET Core 2.1 is supported on Raspberry Pi 2+. It isn’t supported on the Pi Zero or other devices that use an ARMv6 chip. .NET Core requires ARMv7 or ARMv8 chips, like the ARM Cortex-A53. There are even Docker images for ARM32
  • Brotli Support - new lossless compression algo for the web.
  • Tons of new Crypto Support.
  • Source Debugging from NuGet Packages (finally!) called "SourceLink."
  • .NET Core Global Tools:
dotnet tool install -g dotnetsay
dotnetsay

In fact, if you have Docker installed go try an ASP.NET Sample:

docker pull microsoft/dotnet-samples:aspnetapp
docker run --rm -it -p 8000:80 --name aspnetcore_sample microsoft/dotnet-samples:aspnetapp

.NET Core 3.0

This is huge. You'll soon be able to take your existing WinForms and WPF app (I did this with a 12 year old WPF app!) and swap out the underlying runtime. That means you can run WinForms and WPF on .NET Core 3 on Windows.

Why is this cool?

  • WinForms/WPF apps can be self-contained and run in a single folder.

No need to install anything, just xcopy deploy. WinFormsApp1 can't affect WPFApp2 because they can each target their own .NET Core 3 version. Updates to the .NET Framework on Windows are system-wide and can sometimes cause problems with legacy apps. You'll now have total control and update apps one at a time and they can't affect each other. C#, F# and VB already work with .NET Core 2.0. You will be able to build desktop applications with any of those three languages with .NET Core 3.

Secondly, you'll get to use all the new C# 7.x+ (and beyond) features sooner than ever. .NET Core moves fast but you can pick and choose the language features and libraries you want. For example, I can update BabySmash (my .NET 3.5 WPF app) to .NET Core 3.0 and use new C# features AND bring in UWP Controls that didn't exist when BabySmash was first written! WinForms and WPF apps will also get the new lightweight csproj format. More details here and a full video below.

  • Compile to a single EXE

Even more, why not compile the whole app into a single EXE. I can make BabySmash.exe and it'll just work. No install, everything self-contained.

.NET Core 3 will still be cross platform, but WinForms and WPF remain "W is for Windows" - the runtime is swappable, but they still P/Invoke into the Windows APIs. You can look elsewhere for .NET Core cross-platform UI apps with frameworks like Avalonia, Ooui, and Blazor.

Diagram showing that .NET Core will support Windows UI Frameworks

You can check out the video from BUILD here. We show 2.1, 3.0, and some amazing demos like compiling a .NET app into a single exe and running it on a computer from the audience, as well as taking the 12 year old BabySmash WPF app and running it on .NET Core 3.0 PLUS adding a UWP Touch Ink Control!

Lots of cool stuff coming today AND tomorrow with open source .NET Core!


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Using LazyCache for clean and simple .NET Core in-memory caching


Tai Chi by Luisen Rodrigo - Used under CC

I'm continuing to use .NET Core 2.1 to power my Podcast Site, and I've done a series of posts on some of the experiments I've been doing. I also upgraded to .NET Core 2.1 RC that came out this week. Here's some posts if you want to catch up:

Having a blast, if I may say so.

I've been trying a number of ways to cache locally. I have an expensive call to a backend (7-8 seconds or more, without deserialization) so I want to cache it locally for a few hours until it expires. I have a way that works very well using a SemaphoreSlim. There are some issues to be aware of but it has been rock solid. However, in the comments of the last caching post a number of people suggested I use "LazyCache."

Alastair from the LazyCache team said this in the comments:

LazyCache wraps your "build stuff I want to cache" func in a Lazy<> or an AsyncLazy<> before passing it into MemoryCache to ensure the delegate only gets executed once as you retrieve it from the cache. It also allows you to swap between sync and async for the same cached thing. It is just a very thin wrapper around MemoryCache to save you the hassle of doing the locking yourself. A netstandard 2 version is in pre-release.
Since you asked the implementation is in CachingService.cs#L119 and proof it works is in CachingServiceTests.cs#L343

Nice! Sounds like it's worth trying out. Most importantly, it'll allow me to "refactor via subtraction."

I want to have my "GetShows()" method go off and call the backend "database" which is a REST API over HTTP living at SimpleCast.com. That backend call is expensive and doesn't change often. I publish new shows every Thursday, so ideally SimpleCast would have a standard WebHook and I'd cache the result forever until they called me back. For now I will just cache it for 8 hours - a long but mostly arbitrary number. Really want that WebHook as that's the correct model, IMHO.

LazyCache was added in ConfigureServices in Startup.cs:

services.AddLazyCache();

Kind of anticlimactic. ;)

Then I just make a method that knows how to populate my cache. That's just a "Func" that returns a Task of List of Shows as you can see below. Then I call IAppCache's "GetOrAddAsync" from LazyCache that either GETS the List of Shows out of the Cache OR it calls my Func, does the actual work, then returns the results. The results are cached for 8 hours. Compare this to my previous code and it's a lot cleaner.

public class ShowDatabase : IShowDatabase
{
    private readonly IAppCache _cache;
    private readonly ILogger _logger;
    private SimpleCastClient _client;
    public ShowDatabase(IAppCache appCache,
            ILogger<ShowDatabase> logger,
            SimpleCastClient client)
    {
        _client = client;
        _logger = logger;
        _cache = appCache;
    }
    public async Task<List<Show>> GetShows()
    {    
        Func<Task<List<Show>>> showObjectFactory = () => PopulateShowsCache();
        var retVal = await _cache.GetOrAddAsync("shows", showObjectFactory, DateTimeOffset.Now.AddHours(8));
        return retVal;
    }
 
    private async Task<List<Show>> PopulateShowsCache()
    {
        List<Show> shows = await _client.GetShows();
        _logger.LogInformation($"Loaded {shows.Count} shows");
        return shows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
    }
}

It's always important to point out there's a dozen or more ways to do this. I'm not selling a prescription here or The One True Way, but rather exploring the options and edges and examining the trade-offs.

  • As mentioned before, me using "shows" as a magic string for the key here makes no guarantees that another co-worker isn't also using "shows" as the key.
    • Solution? Depends. I could have a function-specific unique key but that only ensures this function is fast twice. If someone else is calling the backend themselves I'm losing the benefits of a centralized (albeit process-local - not distributed like Redis) cache. A small sketch of a function-scoped key (and a filtered accessor) follows after this list.
  • I'm also caching the full list and then doing a where/filter every time.
    • A little sloppiness on my part, but also because I'm still feeling this area out. Do I want to cache the whole thing and then let the callers filter? Or do I want to have GetShows() and GetActiveShows()? Dunno yet. But worth pointing out.
  • There's layers to caching. Do I cache the HttpResponse but not the deserialization? Here I'm caching the List<Shows>, complete. I like caching List<T> because a caller can query it, although I'm sending back just active shows (see above).
    • Another perspective is to use the <cache> TagHelper in Razor and cache Razor's resulting rendered HTML. There is value in caching the object graph, but I need to think about perhaps caching both List<T> AND the rendered HTML.
    • I'll explore this next.
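For the first and third bullets above, a couple of hedged sketches - the names are mine and this isn't a recommendation, just one shape the idea could take:

// A key that's much less likely to collide with a co-worker's "shows"
private static readonly string ShowsCacheKey = $"{nameof(ShowDatabase)}.{nameof(GetShows)}";

// Cache the full list once, filter at the edges
public async Task<List<Show>> GetActiveShows() =>
    (await GetShows()).Where(s => s.PublishedAt < DateTime.UtcNow).ToList();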

I'm enjoying myself though. ;)

Go explore LazyCache! I'm using beta2 but there's a whole number of releases going back years and it's quite stable so far.

Lazy cache is a simple in-memory caching service. It has a developer friendly generics based API, and provides a thread safe cache implementation that guarantees to only execute your cachable delegates once (it's lazy!). Under the hood it leverages ObjectCache and Lazy to provide performance and reliability in heavy load scenarios.
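Conceptually, that Lazy-in-the-cache trick looks something like the sketch below. This is my illustration of the idea, not LazyCache's actual source, and it assumes an IMemoryCache named memoryCache; LazyCache also adds its own locking around the GetOrCreate step on top of this.

// Callers that get back the same cached Lazy share a single invocation of the factory,
// even if they all race in before the first call finishes.
var lazyShows = memoryCache.GetOrCreate("shows", entry =>
{
    entry.AbsoluteExpiration = DateTimeOffset.Now.AddHours(8);
    return new Lazy<Task<List<Show>>>(() => PopulateShowsCache());
});
List<Show> shows = await lazyShows.Value;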

For ASP.NET Core it's quick to experiment with LazyCache and get it set up. Give it a try, and share your favorite caching techniques in the comments.

Tai Chi photo by Luisen Rodrigo used under Creative Commons Attribution 2.0 Generic (CC BY 2.0), thanks!


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Building, Running, and Testing .NET Core and ASP.NET Core 2.1 in Docker on a Raspberry Pi (ARM32)


I love me some Raspberry Pi. They are great little learning machines and are super fun for kids to play with. Even if those kids are adults and they build a 6 node Kubernetes Raspberry Pi Cluster.

Open source .NET Core runs basically everywhere - Windows, Mac, and a dozen Linuxes. However, there is an SDK (that compiles and builds) and a Runtime (that does the actual running of your app). In the past, the .NET Core SDK (to be clear, the ability to "dotnet build") wasn't supported on ARMv7/ARMv8 chips like the Raspberry Pi. Now it is.

.NET Core is now supported on Linux ARM32 distros, like Raspbian and Ubuntu!

Note: .NET Core 2.1 is supported on Raspberry Pi 2+. It isn’t supported on the Pi Zero or other devices that use an ARMv6 chip. .NET Core requires ARMv7 or ARMv8 chips, like the ARM Cortex-A53. Folks on the Azure IoT Edge team use the .NET Core Bionic ARM32 Docker images to support developers writing C# with Edge devices.

There's two ways to run .NET Core on a Raspberry Pi.

One, use Docker. This is literally the fastest and easiest way to get .NET Core up and running on a Pi. It sounds crazy but Raspberry Pis are brilliant little Docker container capable systems. You can do it in minutes, truly. You can install Docker quickly on a Raspberry Pi with just:

curl -sSL https://get.docker.com | sh

sudo usermod -aG docker pi

After installing Docker you'll want to log in and out. You might want to try a quick sample to make sure .NET Core runs! You can explore the available Docker tags at https://hub.docker.com/r/microsoft/dotnet/tags/ and you can read about the .NET Core Docker samples here https://github.com/dotnet/dotnet-docker/tree/master/samples/dotnetapp

Now I can just docker run and then pass in "dotnet --info" to find out about dotnet on my Pi.

pi@raspberrypi:~ $ docker run --rm -it microsoft/dotnet:2.1-sdk dotnet --info

.NET Core SDK (reflecting any global.json):
Version: 2.1.300-rc1-008673
Commit: f5e3ddbe73

Runtime Environment:
OS Name: debian
OS Version: 9
OS Platform: Linux
RID: debian.9-x86
Base Path: /usr/share/dotnet/sdk/2.1.300-rc1-008673/

Host (useful for support):
Version: 2.1.0-rc1
Commit: eb9bc92051

.NET Core SDKs installed:
2.1.300-rc1-008673 [/usr/share/dotnet/sdk]

.NET Core runtimes installed:
Microsoft.NETCore.App 2.1.0-rc1 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

To install additional .NET Core runtimes or SDKs:
https://aka.ms/dotnet-download

This is super cool. There I'm on the Raspberry Pi (RPi) and I just ask for the dotnet:2.1-sdk and because they are using "multiarch" docker files, Docker does the right thing and it just works. If you want to use .NET Core on ARM32 with Docker, you can use any of the following tags.

Note: The first three tags are multi-arch and bionic is Ubuntu 18.04. The codename stretch is Debian 9. So I'm using 2.1-sdk and it's working on my RPi, but I can be specific if I'd prefer.

  • 2.1-sdk
  • 2.1-runtime
  • 2.1-aspnetcore-runtime
  • 2.1-sdk-stretch-arm32v7
  • 2.1-runtime-stretch-slim-arm32v7
  • 2.1-aspnetcore-runtime-stretch-slim-arm32v7
  • 2.1-sdk-bionic-arm32v7
  • 2.1-runtime-bionic-arm32v7
  • 2.1-aspnetcore-runtime-bionic-arm32v7

Try one in minutes like this:

docker run --rm microsoft/dotnet-samples:dotnetapp

Here it is downloading the images...

Docker on a Raspberry Pi

In previous versions of .NET Core's Dockerfiles it would fail if you were running an x64 image on ARM:

standard_init_linux.go:190: exec user process caused "exec format error"

Different processors! But with multi-arch support, per this announcement from Kendra at Microsoft (https://github.com/dotnet/announcements/issues/14), it just works with 2.1.

Docker has a multi-arch feature that microsoft/dotnet-nightly recently started utilizing. The plan is to port this to the official microsoft/dotnet repo shortly. The multi-arch feature allows a single tag to be used across multiple machine configurations. Without this feature each architecture/OS/platform requires a unique tag. For example, the microsoft/dotnet:1.0-runtime tag is based on Debian and microsoft/dotnet:1.0-runtime-nanoserver is based on Nano Server. With multi-arch there will be one common microsoft/dotnet:1.0-runtime tag. If you pull that tag from a Linux container environment you will get the Debian based image whereas if you pull that tag from a Windows container environment you will get the Nano Server based image. This helps provide tag uniformity across Docker environments thus eliminating confusion.

In these examples above I can:

  • Run a preconfigured app within a Docker image like:
    • docker run --rm microsoft/dotnet-samples:dotnetapp
  • Run dotnet commands within the SDK image like:
    • docker run --rm -it microsoft/dotnet:2.1-sdk dotnet --info
  • Run an interactive terminal within the SDK image like:
    • docker run --rm -it microsoft/dotnet:2.1-sdk

As a quick example, here I'll jump into a container and new up a quick console app and run it, just to prove I can. This work will be thrown away when I exit the container.

pi@raspberrypi:~ $ docker run --rm -it microsoft/dotnet:2.1-sdk

root@063f3c50c88a:/# ls
bin boot dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
root@063f3c50c88a:/# cd ~
root@063f3c50c88a:~# mkdir mytest
root@063f3c50c88a:~# cd mytest/
root@063f3c50c88a:~/mytest# dotnet new console
The template "Console Application" was created successfully.

Processing post-creation actions...
Running 'dotnet restore' on /root/mytest/mytest.csproj...
Restoring packages for /root/mytest/mytest.csproj...
Installing Microsoft.NETCore.DotNetAppHost 2.1.0-rc1.
Installing Microsoft.NETCore.DotNetHostResolver 2.1.0-rc1.
Installing NETStandard.Library 2.0.3.
Installing Microsoft.NETCore.DotNetHostPolicy 2.1.0-rc1.
Installing Microsoft.NETCore.App 2.1.0-rc1.
Installing Microsoft.NETCore.Platforms 2.1.0-rc1.
Installing Microsoft.NETCore.Targets 2.1.0-rc1.
Generating MSBuild file /root/mytest/obj/mytest.csproj.nuget.g.props.
Generating MSBuild file /root/mytest/obj/mytest.csproj.nuget.g.targets.
Restore completed in 15.8 sec for /root/mytest/mytest.csproj.

Restore succeeded.
root@063f3c50c88a:~/mytest# dotnet run
Hello World!
root@063f3c50c88a:~/mytest# dotnet exec bin/Debug/netcoreapp2.1/mytest.dll
Hello World!

If you try it yourself, you'll note that "dotnet run" isn't very fast. That's because it does a restore, build, and run. Compilation isn't super quick on these tiny devices. You'll want to do as little work as possible. Rather than a "dotnet run" all the time, I'll do a "dotnet build" then a "dotnet exec" which is very fast.

If you're going to do Docker and .NET Core, I can't stress enough how useful the resources are over at https://github.com/dotnet/dotnet-docker.

Building .NET Core Apps with Docker

Develop .NET Core Apps in a Container

  • Develop .NET Core Applications - This sample shows how to develop, build and test .NET Core applications with Docker without the need to install the .NET Core SDK.
  • Develop ASP.NET Core Applications - This sample shows how to develop and test ASP.NET Core applications with Docker without the need to install the .NET Core SDK.

Optimizing Container Size

ARM32 / Raspberry Pi

I found the samples to be super useful...be sure to dig into the Dockerfiles themselves as it'll give you a ton of insight into how to structure your own files. Being able to do Multistage Dockerfiles is crucial when building on a small device like a RPi. You want to do as little work as possible and let Docker cache as many layers with its internal "smarts." If you're not thoughtful about this, you'll end up wasting 10x the time building image layers every build.

Dockerizing a real ASP.NET Core Site with tests!

Can I take my podcast site and Dockerize it and build/test/run it on a Raspberry Pi? YES.

FROM microsoft/dotnet:2.1-sdk AS build

WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY hanselminutes.core/*.csproj ./hanselminutes.core/
COPY hanselminutes.core.tests/*.csproj ./hanselminutes.core.tests/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/hanselminutes.core
RUN dotnet build


FROM build AS testrunner
WORKDIR /app/hanselminutes.core.tests
ENTRYPOINT ["dotnet", "test", "--logger:trx"]


FROM build AS test
WORKDIR /app/hanselminutes.core.tests
RUN dotnet test


FROM build AS publish
WORKDIR /app/hanselminutes.core
RUN dotnet publish -c Release -o out


FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=publish /app/hanselminutes.core/out ./
ENTRYPOINT ["dotnet", "hanselminutes.core.dll"]

Love it. Now I can "docker build ." on my Raspberry Pi. It will restore, test, and build. If the tests fail, the Docker build will fail.

See how there's an extra section up there called "testrunner" and then after it is "test?" That testrunner section is a no-op. It sets an ENTRYPOINT but it is never used...yet. The ENTRYPOINT is an implicit run if it is the last line in the Dockerfile. That's there so I can "Run up to it" if I want to.

I can just build and run like this:

docker build -t podcast .

docker run --rm -it -p 8000:80 podcast

NOTE/GOTCHA: Note that the "runtime" image is microsoft/dotnet:2.1-aspnetcore-runtime, not microsoft/dotnet:2.1-runtime. That aspnetcore one pre-includes the binaries I need for running an ASP.NET app, that way I can just include a single reference to "<PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0-rc1-final" />" in my csproj. If I didn't use the aspnetcore-runtime base image, I'd need to manually pull in all the ASP.NET Core packages that I want. Using the base image might make the resulting image files larger, but it's a balance between convenience and size. It's up to you. You can manually include just the packages you need, or pull in the "Microsoft.AspNetCore.App" meta-package for convenience. My resulting "podcast" image ended up 205 megs, so not too bad, but of course if I wanted I could trim it in a number of ways.

Or, if I JUST want test results from Docker, I can do this! That means I can run the tests in the Docker container, mount a volume between the Linux container and the (theoretical) Windows host, and then open the resulting .trx file in Visual Studio!

docker build --pull --target testrunner -t podcast:test .

docker run --rm -v D:\github\hanselminutes-core\TestResults:/app/hanselminutes.core.tests/TestResults podcast:test

Check it out! These are the test results from the tests that ran within the Linux Container:

XUnit Tests from within a Docker Container on Linux viewed within Visual Studio on Windows

Here's the result. I've now got my Podcast website running in Docker on an ARM32 Raspberry Pi 3 with just an hours' work (writing the Dockerfile)!

It's my podcast site running under Docker on .NET Core 2.1 on a Raspberry Pi

Second - did you make it this far down? - You can just install the .NET Core 2.1 SDK "on the metal." No Docker, just get the tar.gz and set it up. Looking at the RPi ARM32v7 Dockerfile, I can install it on the metal like this:

$ sudo apt-get -y update

$ sudo apt-get -y install libunwind8 gettext
$ wget https://dotnetcli.blob.core.windows.net/dotnet/Sdk/2.1.300-rc1-008673/dotnet-sdk-2.1.300-rc1-008673-linux-arm.tar.gz
$ sudo mkdir /opt/dotnet
$ sudo tar -xvf dotnet-sdk-2.1.300-rc1-008673-linux-arm.tar.gz -C /opt/dotnet
$ sudo ln -s /opt/dotnet/dotnet /usr/local/bin
$ dotnet --info

Cross-platform for the win!


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2018 Scott Hanselman. All rights reserved.
     

Installing PowerShell Core on a Raspberry Pi (powered by .NET Core)


PowerShell Core on a Raspberry Pi!

Earlier this week I set up .NET Core and Docker on a Raspberry Pi and found that I could run my podcast website quite easily on a Pi. Check that post out as there's a lot going on. I can test within a Linux Container and output the test results to the host and then open them in VS. I also explored a reasonably complex Dockerfile that is both multiarch and multistage. I can reliably build and test my website either inside a container or on the bare metal of Windows or Linux. Very fun.

As primarily a Windows developer I have lots of batch/cmd files like "test.bat" or "dockerbuild.bat." They start as little throwaway bits of automation but inevitably grow more complex as the project grows.

I'm not interested in "selling" anyone PowerShell. If you like bash, use bash, it's lovely, as are shell scripts. PowerShell is object-oriented in its pipeline, moving lists of real objects as standard output. They are different and most importantly, they can live together. Just like you might call Python scripts from bash, you can call PowerShell scripts from bash, or vice versa. Another tool in our toolkits.

PS /home/pi> Get-Process | Where-Object WorkingSet -gt 10MB


NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName
------ ----- ----- ------ -- -- -----------
0 0.00 10.92 890.87 917 917 docker-containe
0 0.00 35.64 1,140.29 449 449 dockerd
0 0.00 10.36 0.88 1272 037 light-locker
0 0.00 20.46 608.04 1245 037 lxpanel
0 0.00 69.06 32.30 3777 749 pwsh
0 0.00 31.60 107.74 647 647 Xorg
0 0.00 10.60 0.77 1279 037 zenity
0 0.00 10.52 0.77 1280 037 zenity

Bash and shell scripts are SUPER powerful. It's a whole world. But it is text based (or json for some newer things) so you're often thinking about text more.

pi@raspberrypidotnet:~ $ ps aux | sort -rn -k 5,6 | head -n6

root 449 0.5 3.8 956240 36500 ? Ssl May17 19:00 /usr/bin/dockerd -H fd://
root 917 0.4 1.1 910492 11180 ? Ssl May17 14:51 docker-containerd --config /var/run/docker/containerd/containerd.toml
root 647 0.0 3.4 155608 32360 tty7 Ssl+ May17 1:47 /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
pi 1245 0.2 2.2 153132 20952 ? Sl May17 10:08 lxpanel --profile LXDE-pi
pi 1272 0.0 1.1 145928 10612 ? Sl May17 0:00 light-locker
pi 1279 0.0 1.1 145020 10856 ? Sl May17 0:00 zenity --warning --no-wrap --text

You can take it as far as you like. For some it's intuitive power, for others, it's baroque.

pi@raspberrypidotnet:~ $ ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'

0.00 Mb COMMAND
161.14 Mb /usr/bin/dockerd -H fd://
124.20 Mb docker-containerd --config /var/run/docker/containerd/containerd.toml
78.23 Mb lxpanel --profile LXDE-pi
66.31 Mb /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
61.66 Mb light-locker

Point is, there's choice. Here's a nice article about PowerShell from the perspective of a Linux user. Can I install PowerShell on my Raspberry Pi (or any Linux machine) and use the same scripts in both places? YES.

For many years PowerShell was a Windows-only thing that was part of the closed Windows ecosystem. In fact, here's video of me nearly 12 years ago (I was working in banking) talking to Jeffrey Snover about PowerShell. Today, PowerShell is open source up at https://github.com/PowerShell with lots of docs and scripts, also open source. PowerShell is supported on Windows, Mac, and a half-dozen Linuxes. Sound familiar? That's because it's powered (ahem) by open source cross platform .NET Core. You can get PowerShell Core 6.0 here on any platform.

Don't want to install it? Start it up in Docker in seconds with

docker run -it microsoft/powershell

Sweet. How about Raspbian on my ARMv7 based Raspberry Pi? I was running Raspbian Jessie and PowerShell is supported on Raspbian Stretch (newer) so I upgraded from Jessie to Stretch (and tidied up and did the firmware while I was at it) with:

$ sudo apt-get update

$ sudo apt-get upgrade
$ sudo apt-get dist-upgrade
$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/raspi.list
$ sudo apt-get update && sudo apt-get upgrade -y
$ sudo apt-get dist-upgrade -y
$ sudo rpi-update

Cool. Now I'm on Raspbian Stretch on my Raspberry Pi 3. Let's install PowerShell! These are just the most basic Getting Started instructions. Check out GitHub for advanced and detailed info if you have issues with prerequisites or paths.

NOTE: Here I'm getting PowerShell Core 6.0.2. Be sure to check the releases page for newer releases if you're reading this in the future. I've also used 6.1.0 (in preview) with success. The next 6.1 preview will upgrade to .NET Core 2.1. If you're just evaluating, get the latest preview as it'll have the most recent bug fixes.

$ sudo apt-get install libunwind8

$ wget https://github.com/PowerShell/PowerShell/releases/download/v6.0.2/powershell-6.0.2-linux-arm32.tar.gz
$ mkdir ~/powershell
$ tar -xvf ./powershell-6.0.2-linux-arm32.tar.gz -C ~/powershell
$ sudo ln -s ~/powershell/pwsh /usr/bin/pwsh
$ sudo ln -s ~/powershell/pwsh /usr/local/bin/powershell
$ powershell

Lovely.

GOTCHA: Because I upgraded from Jessie to Stretch, I ran into a bug where libssl1.0.0 is getting loaded over libssl1.0.2. This is a complex native issue with interaction between PowerShell and .NET Core 2.0 that's being fixed. Only upgraded machines like mine will hit it, but it's easily fixed with sudo apt-get remove libssl1.0.0

Now this means my PowerShell build scripts can work on both Windows and Linux. This is a deeply trivial example (just one line) but note the "shebang" at the top that lets Linux know what a *.ps1 file is for. That means I can keep using bash/zsh/fish on Raspbian, but still "build.ps1" or "test.ps1" on any platform.

#!/usr/local/bin/powershell

dotnet watch --project .\hanselminutes.core.tests test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov
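
One small gotcha: for that shebang to do anything on Linux, the script file itself needs to be executable. Assuming you've saved it as build.ps1 (the name here is just an example), it's the usual:

chmod +x ./build.ps1
./build.ps1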

Here's a few totally random but lovely PowerShell examples:

PS /home/pi> Get-Date | Select-Object -Property * | ConvertTo-Json

{
  "DisplayHint": 2,
  "DateTime": "Sunday, May 20, 2018 5:55:35 AM",
  "Date": "2018-05-20T00:00:00+00:00",
  "Day": 20,
  "DayOfWeek": 0,
  "DayOfYear": 140,
  "Hour": 5,
  "Kind": 2,
  "Millisecond": 502,
  "Minute": 55,
  "Month": 5,
  "Second": 35,
  "Ticks": 636623925355021162,
  "TimeOfDay": {
    "Ticks": 213355021162,
    "Days": 0,
    "Hours": 5,
    "Milliseconds": 502,
    "Minutes": 55,
    "Seconds": 35,
    "TotalDays": 0.24693868190046295,
    "TotalHours": 5.9265283656111105,
    "TotalMilliseconds": 21335502.1162,
    "TotalMinutes": 355.59170193666665,
    "TotalSeconds": 21335.502116199998
  },
  "Year": 2018
}

You can take PowerShell objects to and from Objects, Hashtables, JSON, etc.

PS /home/pi> $hash = @{ Number = 1; Shape = "Square"; Color = "Blue"}
PS /home/pi> $hash

Name                           Value
----                           -----
Shape                          Square
Color                          Blue
Number                         1

PS /home/pi> $hash | ConvertTo-Json
{
  "Shape": "Square",
  "Color": "Blue",
  "Number": 1
}
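
It goes the other way too. This is just a quick sketch with made-up JSON, but ConvertFrom-Json hands you back a real object you can dot into:

PS /home/pi> $obj = '{"Name": "raspberrypidotnet", "Cores": 4}' | ConvertFrom-Json
PS /home/pi> $obj.Cores
4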

Here's a nice one from MCPMag:

PS /home/pi> $URI = "https://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where woeid in (select woeid from geo.places(1) where text='{0}, {1}')&format=json&env=store://datatables.org/alltableswithkeys" -f 'Omaha','NE'

PS /home/pi> $Data = Invoke-RestMethod -Uri $URI
PS /home/pi> $Data.query.results.channel.item.forecast|Format-Table

code date        day high low text
---- ----        --- ---- --- ----
39   20 May 2018 Sun 62   56  Scattered Showers
30   21 May 2018 Mon 78   53  Partly Cloudy
30   22 May 2018 Tue 88   61  Partly Cloudy
4    23 May 2018 Wed 89   67  Thunderstorms
4    24 May 2018 Thu 91   68  Thunderstorms
4    25 May 2018 Fri 92   69  Thunderstorms
34   26 May 2018 Sat 89   68  Mostly Sunny
34   27 May 2018 Sun 85   65  Mostly Sunny
30   28 May 2018 Mon 85   63  Partly Cloudy
47   29 May 2018 Tue 82   63  Scattered Thunderstorms

Or a one-liner if you want to be obnoxious.

PS /home/pi> (Invoke-RestMethod -Uri "https://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where woeid in (select woeid from geo.places(1) where text='Omaha, NE')&format=json&env=store://datatables.org/alltableswithkeys").query.results.channel.item.forecast|Format-Table

Example: This won't work on Linux as it's using Windows-specific APIs, but if you've got PowerShell on your Windows machine, try out this one-liner for a cool demo:

iex (New-Object Net.WebClient).DownloadString("http://bit.ly/e0Mw9w")

Thoughts?


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!




Real Browser Integration Testing with Selenium Standalone, Chrome, and ASP.NET Core 2.1


I find your lack of tests disturbing

Buckle up kids, this is nuts and I'm probably doing it wrong. ;) And it's 2am and I wrote this fast. I'll come back tomorrow and fix the spelling.

I want to have lots of tests to make sure my new podcast site is working well. As mentioned before, I've been updating the site to ASP.NET Core 2.1.

Here's some posts if you want to catch up:

I've been doing my testing with XUnit and I want to test in layers.

Basic Unit Testing

Simply create a Razor Page's Model in memory and call OnGet or WhateverMethod. At this point you are NOT calling HTTP; there is no web server.

public IndexModel pageModel;

public IndexPageTests()
{
    var testShowDb = new TestShowDatabase();
    pageModel = new IndexModel(testShowDb);
}

[Fact]
public async void MainPageTest()
{
    // FAKE HTTP GET "/"
    IActionResult result = await pageModel.OnGetAsync(null, null);

    Assert.NotNull(result);
    Assert.True(pageModel.OnHomePage); //we are on the home page, because "/"
    Assert.Equal(16, pageModel.Shows.Count()); //home page has 16 shows showing
    Assert.Equal(620, pageModel.LastShow.ShowNumber); //last test show is #620
}

Moving out a layer...

In-Memory Testing with both Client and Server using WebApplicationFactory

Here we are starting up the app and calling it with a client, but the "HTTP" of it all is happening in memory/in process. There are no open ports, there's no localhost:5000. We can still test HTTP semantics though.

public class TestingFunctionalTests : IClassFixture<WebApplicationFactory<Startup>>
{
    public HttpClient Client { get; }
    public WebApplicationFactory<Startup> Server { get; }

    public TestingFunctionalTests(WebApplicationFactory<Startup> server)
    {
        Client = server.CreateClient();
        Server = server;
    }

    [Fact]
    public async Task GetHomePage()
    {
        // Arrange & Act
        var response = await Client.GetAsync("/");

        // Assert
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
    ...
}

Testing with a real Browser and real HTTP using Selenium Standalone and Chrome

THIS is where it gets interesting with ASP.NET Core 2.1 as we are going to fire up both the complete web app, talking to the real back end (although it could talk to a local test DB if you want) as well as a real headless version of Chrome being managed by Selenium Standalone and talked to with the WebDriver. It sounds complex, but it's actually awesome and super useful.

First I add the Selenium.Support and Selenium.WebDriver NuGet packages to my Test project:

dotnet add package "Selenium.Support"

dotnet add package "Selenium.WebDriver"

Make sure you have node and npm then you can get Selenium Standalone like this:

npm install -g selenium-standalone@latest

selenium-standalone install

Chrome is being controlled by automated test software

Selenium, to be clear, puts your browser on a puppet's strings. Even Chrome knows it's being controlled! It's using the (soon to be standard, but clearly de facto standard) WebDriver protocol. Imagine if your browser had a localhost REST protocol where you could interrogate it and click stuff! I've been using Selenium for over 11 years. You can even test actual Windows apps (not in the browser) with WinAppDriver/Appium, but that's for another post.
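
To make that "localhost REST protocol" idea concrete, here's roughly what talking to Selenium Standalone looks like over plain HTTP once it's running on its default port 4444. This is just a sketch - the exact endpoints and payloads vary between the older JSON Wire protocol and the newer W3C WebDriver spec:

curl -X POST http://localhost:4444/wd/hub/session -H "Content-Type: application/json" -d '{"desiredCapabilities": {"browserName": "chrome"}}'
# That returns a sessionId, and from there it's all HTTP - for example, navigate with
# POST http://localhost:4444/wd/hub/session/<sessionId>/url and a body like {"url": "https://www.hanselman.com"}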

Now for this part, bear with me, because the ServerFactory class I'm about to make is doing two things. It's setting up my ASP.NET Core 2.1 app and actually running it so it's listening on https://localhost:5001. It's assuming a few things that I'll point out. It also (perhaps questionably) launches Selenium Standalone from within its constructor. Questionable, to be clear, and there are other ways to do this, but this is VERY simple.

If it offends you - remembering that you do need to start Selenium Standalone with "selenium-standalone start" - you could do it OUTSIDE your test in a script.

Perhaps do the startup/teardown work in a PowerShell or shell script. Start it up, save the process id, then stop it when you're done. Note I'm also checking code coverage here with Coverlet, but that's not related to Selenium - I could just "dotnet test."

#!/usr/local/bin/powershell

$SeleniumProcess = Start-Process "selenium-standalone" -ArgumentList "start" -PassThru
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov .\hanselminutes.core.tests
Stop-Process -Id $SeleniumProcess.Id

Here my SeleniumServerFactory is getting my Browser and Server ready.

SIDEBAR NOTE: I want to point out that this is NOT perfect and it's literally the simplest thing possible to get things working. It's my belief, though, that there are some problems here and that I shouldn't have to fake out the "new TestServer" in CreateServer there. While the new WebApplicationFactory is great for in-memory unit testing, it should be just as easy to fire up your app and use a real port for things like Selenium testing. Here I'm building and starting the IWebHostBuilder myself (!) and then making a fake TestServer only to satisfy the CreateServer method, which I think should not have a concrete class return type. For testing, ideally I could easily get either an "InMemoryWebApplicationFactory" or a "PortUsingWebApplicationFactory" (naming is hard). Hopefully this is somewhat clear and something that can be easily adjusted for ASP.NET Core 2.1.x.

My app is configured to listen on both http://localhost:5000 and https://localhost:5001, so you'll note where I'm getting that last value (in an attempt to avoid hard-coding it). We also make sure to stop both the Server and the Browser in Dispose() at the bottom.

public class SeleniumServerFactory<TStartup> : WebApplicationFactory<TStartup> where TStartup : class
{
    public string RootUri { get; set; } //Save this for use by tests

    Process _process;
    IWebHost _host;

    public SeleniumServerFactory()
    {
        ClientOptions.BaseAddress = new Uri("https://localhost"); //will follow redirects by default

        _process = new Process() {
            StartInfo = new ProcessStartInfo {
                FileName = "selenium-standalone",
                Arguments = "start",
                UseShellExecute = true
            }
        };
        _process.Start();
    }

    protected override TestServer CreateServer(IWebHostBuilder builder)
    {
        //Real TCP port
        _host = builder.Build();
        _host.Start();
        RootUri = _host.ServerFeatures.Get<IServerAddressesFeature>().Addresses.LastOrDefault(); //Last is https://localhost:5001!

        //Fake Server we won't use...this is lame. Should be cleaner, or a utility class
        return new TestServer(new WebHostBuilder().UseStartup<TStartup>());
    }

    protected override void Dispose(bool disposing)
    {
        base.Dispose(disposing);
        if (disposing)
        {
            _host.Dispose();
            _process.CloseMainWindow(); //Be sure to stop Selenium Standalone
        }
    }
}

But what does a complete series of tests look like? I have a Server, a Browser, and a (theoretically optional) HttpClient. Focus on the Browser and Server.

At the point when a single test starts, my site is up (the Server) and an invisible headless Chrome (the Browser) is actually being puppeted with local calls via WebDriver. All this is hidden from you - if you want. You can certainly see Chrome (or other browsers) get automated, but what's nice about Selenium Standalone with hidden/headless Browser testing is that my unit tests now also include these complete Integration Tests and can run as part of my Continuous Integration Build.

Again, layers. I test classes, then move out and test HTTP request/response interactions, and finally the site is up and I'm making sure I can navigate and that data is loading. I'm automating the "smoke tests" that I used to do myself! And I can make as many of these as I'd like now that the scaffolding work is done.

public class SeleniumTests : IClassFixture<SeleniumServerFactory<Startup>>, IDisposable
{
    public SeleniumServerFactory<Startup> Server { get; }
    public IWebDriver Browser { get; }
    public HttpClient Client { get; }
    public ILogs Logs { get; }

    public SeleniumTests(SeleniumServerFactory<Startup> server)
    {
        Server = server;
        Client = server.CreateClient(); //weird side effecty thing here. This call shouldn't be required for setup, but it is.

        var opts = new ChromeOptions();
        opts.AddArgument("--headless"); //Optional, comment this out if you want to SEE the browser window
        opts.SetLoggingPreference(OpenQA.Selenium.LogType.Browser, LogLevel.All);

        var driver = new RemoteWebDriver(opts);
        Browser = driver;
        Logs = new RemoteLogs(driver); //TODO: Still not bringing the logs over yet
    }

    [Fact]
    public void LoadTheMainPageAndCheckTitle()
    {
        Browser.Navigate().GoToUrl(Server.RootUri);
        Assert.StartsWith("Hanselminutes Technology Podcast - Fresh Air and Fresh Perspectives for Developers", Browser.Title);
    }

    [Fact]
    public void ThereIsAnH1()
    {
        Browser.Navigate().GoToUrl(Server.RootUri);

        var headerSelector = By.TagName("h1");
        Assert.Equal("HANSELMINUTES PODCAST\r\nby Scott Hanselman", Browser.FindElement(headerSelector).Text);
    }

    [Fact]
    public void KevinScottTestThenGoHome()
    {
        Browser.Navigate().GoToUrl(Server.RootUri + "/631/how-do-you-become-a-cto-with-microsofts-cto-kevin-scott");

        var headerSelector = By.TagName("h1");
        var link = Browser.FindElement(headerSelector);
        link.Click();
        Assert.Equal(Browser.Url.TrimEnd('/'), Server.RootUri); //WTF
    }

    public void Dispose()
    {
        Browser.Dispose();
    }
}

Here's a build, unit test/selenium test with code coverage actually running. I started running it from PowerShell. The black window in the back is Selenium Standalone doing its thing (again, could be hidden).

Two consoles, one with PowerShell running XUnit and one running Selenium

If I comment out the "--headless" line, I'll see this as Chrome is automated. Cool.

Chrome is loading my site and being automated

Of course, I can also run these in the .NET Core Test Explorer in either Visual Studio Code, or Visual Studio.


Great fun. What are your thoughts?


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!




The year of Linux on the (Windows) Desktop - WSL Tips and Tricks


I've been doing a ton of work in bash/zsh/fish lately - Linuxing. In case you didn't know, Windows 10 can run Linux now. Sure, you can run Linux in a VM, but it's heavy and you need a decent machine. You can run a shell under Docker, but you'll need Hyper-V and Windows 10 Pro. You can even go to https://shell.azure.com and get a terminal anywhere - I do this on my Chromebook.

But mostly I run Linux natively on Windows 10. You can too. Just open PowerShell once as Administrator, run this command, and reboot:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Then head over to the Windows Store and download Ubuntu, or Debian, or Kali, or whatever.

What's happening is you're running user-mode Linux without the Linux kernel. The syscalls (system calls) that these unmodified Linuxes use are brokered over to Windows. Fork a Linux process? It's a pico-process in Windows and shows up in Task Manager.

Want files you can edit in both Windows and Linux? Keep your files/code in /mnt/c/ and you can edit them with either OS. Don't use Windows to "reach into the Linux file system." There be dragons.


Once you've got a Linux installed (or many, as I do) you can manage them and use them in a number of ways.

Think this is stupid or foolish? Stop reading and keep running Linux and I wish you all the best. More power to you.

Want to know more? Want to look at new and creative ways you can get the BEST of the Windows UI and Linux command-line tools? Read on, friends.

wslconfig

WSL means "Windows Subsystem for Linux." Starting with Windows 10 version 1709 (that's 2017-09, the Fall Creators Update - run "winver" to see what you're running), you've got a command called "wslconfig." Try it out. It lists the distros you have and controls which one starts when you type "bash."

Check out below that my default for "bash"  is Ubuntu 16.04, but I can run 18.04 manually if I like. See how I move from cmd into bash and exit out, then go back in, seamlessly. Again, no VM.

C:\>wslconfig /l /all

Windows Subsystem for Linux Distributions:
Ubuntu (Default)
Ubuntu-18.04
openSUSE-42
Debian
kali-rolling

C:\>wslconfig /l
Windows Subsystem for Linux Distributions:
Ubuntu (Default)
Ubuntu-18.04
openSUSE-42
Debian
kali-rolling

C:\>bash
128 → $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.4 LTS
Release: 16.04
Codename: xenial
128 → $ exit
logout

C:\>ubuntu1804
scott@SONOFHEXPOWER:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic
scott@SONOFHEXPOWER:~$
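
By the way, if you want a different distro to answer when you type "bash", wslconfig can change the default too. Something like this should do it (run wslconfig /? to see the exact switches on your build):

C:\>wslconfig /setdefault Ubuntu-18.04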

You can also pipe things into Linux commands by piping to wsl or bash like this:

C:\Users\scott\Desktop>dir | wsl grep "poop"

05/18/2018 04:23 PM <DIR> poop

If you're in Windows, running cmd.exe or powershell.exe, it's best to move into Linux by running wsl or bash as it keeps the current directory.

C:\Users\scott\Desktop>bash

129 → $ pwd
/mnt/c/Users/scott/Desktop
129 → $ exit
logout

Cool! Wondering what that number is before my Prompt? That's my blood sugar. But that's another blog post.

wsl.conf

There's a file in /etc/wsl.conf that lets you control things like if your Linux of choice automounts your Windows drives. You can also control more advanced things like if Windows autogenerates a hosts file or processes /etc/fstab. It's up to you!
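
I won't reproduce my whole file here, but a minimal /etc/wsl.conf looks something like this. The keys are the documented ones, and the values below are just illustrative defaults, so check the WSL docs before copying it blindly:

[automount]
enabled = true
root = /mnt/
options = "metadata,umask=22,fmask=11"
mountFsTab = true

[network]
generateHosts = true
generateResolvConf = true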

Distros

There's a half-dozen distros available (and more coming, I'm told), but YOU can also make/package your own Linux distribution for WSL with the packager/distro-launcher that's open sourced on GitHub.

Docker and WSL

Everyone wants to know if you can run Docker "natively" on WSL. No, that's a little too "Inception," and as mentioned, the Linux kernel is not present. The unmodified ELF binaries work fine, but Windows does the work. BUT!

You can run Docker for Windows and click "Expose daemon on localhost:2375" and since Windows and WSL/Linux share the same port space, you CAN run the Docker client very happily on WSL.

After you've got Docker for Windows running in the background, install it in Ubuntu following the regular instructions. Then update your .bashrc to force your local docker client to talk to Docker for Windows:

echo "export DOCKER_HOST=tcp://0.0.0.0:2375" >> ~/.bashrc && source ~/.bashrc

There are lots of much longer and more detailed "Docker on WSL" tutorials, so if you'd like more technical detail, I'd encourage you to check them out! If you use a lot of volume mounts, I found Nick's write-up very useful.

Now when I run "docker images" or whatever from WSL I'm talking to Docker for Windows. Works great, exactly as you'd expect and you're sharing images and containers in both worlds.

128 → $ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE
podcast test 1bd29d0223da 9 days ago 2.07GB
podcast latest e9dd366f0375 9 days ago 271MB
microsoft/dotnet-samples aspnetapp 80a65a6b6f95 11 days ago 258MB
microsoft/dotnet-samples dotnetapp b3d7f438bad3 2 weeks ago 180MB
microsoft/dotnet 2.1-sdk 1f63052e44c2 2 weeks ago 1.72GB
microsoft/dotnet 2.1-aspnetcore-runtime 083ca6a642ea 2 weeks ago 255MB
microsoft/dotnet 2.1-runtime 6d25f57ea9d6 2 weeks ago 180MB
microsoft/powershell latest 708fb186511e 2 weeks ago 318MB
microsoft/azure-cli latest 92bbcaff2f87 3 weeks ago 423MB
debian jessie 4eb8376dc2a3 4 weeks ago 127MB
microsoft/dotnet-samples latest 4070d1d1e7bb 5 weeks ago 219MB
docker4w/nsenter-dockerd latest cae870735e91 7 months ago 187kB
glennc/fancypants latest e1c29c74e891 20 months ago 291MB

Fabulous.

Coding and Editing Files

I need to hit this point again. Do not change Linux files using Windows apps and tools. However, you CAN share files and edit them with both Windows and Linux by keeping code on the Windows filesystem.

For example, my work is at c:\github so it's also at /mnt/c/github. I use Visual Studio Code and edit my code there (or vim, from within WSL) and I run the code from Linux. I can even run bash/wsl from within Visual Studio Code using its integrated terminal. Just hit "Ctrl+Shift+P" in Visual Studio Code and type "Select Default Shell."

Select Default Shell in Visual Studio Code

On Windows 10 Insider builds, Windows now has a UI called "Sets" that will give you tabbed command prompts. Here I am installing Ruby on Rails in Ubuntu next to two other prompts - Cmd and PowerShell. This is all default Windows - no add-ons or extra programs for this experience.

Tabbed Command Prompts

I'm using Rails as an example here because Ruby/Rails support on Windows with native extensions has historically been a challenge. There's been a group of people heroically (and thanklessly) trying to get Ruby on Rails working well on Windows, but today there is no need. It runs great on Linux under Windows.

I can also run Windows apps or tools from Linux as long as I use their full name with extension (like code.exe) or set an alias.

Here I've made an alias "code" that runs VS Code in the current directory, and then I've got VS Code up and editing my new Rails app.
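
The alias itself is nothing special. I'm not showing my exact one, but something along these lines in ~/.bashrc gets the job done:

# call the Windows VS Code from inside WSL; with no arguments, open the current directory
code() { code.exe "${@:-.}"; }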

Editing a Rails app on Linux on Windows 10 with VS Code

I can even mix and match Windows and Linux when piping. This will likely make Windows people happy and deeply offend Linux people. Or, if you're non-denominational like me, you'll dig it!

$ ipconfig.exe | grep IPv4 | cut -d: -f2

172.21.240.1
10.159.21.24

Again, a reminder: modifying files NOT located under /mnt/<x> with a Windows application is not supported. But edit stuff under /mnt/<x> with whatever you like and you're cool.

Sharing Sharing Sharing

If you have Windows 10 Build 17064 or newer (run "ver" from Windows or "cmd.exe /c ver" from Linux), you can even share environment variables!

131 → $ cmd.exe /c ver


Microsoft Windows [Version 10.0.17672.1000]

There's a special environment variable called "WSLENV" that is a colon-delimited list of environment variables that should be included when launching WSL processes from Win32 or Win32 processes from WSL. Basically you give it a list of variables you want to roam/share. This will make it easy for things like cross-platform dual builds. You can even add a /p flag and it'll automatically translate paths between c:\windows style and /mnt/c/windows style.

Check out the example at the WSL Blog about how to share a GOPATH and use VSCode in Windows and run Go in both places.
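
Here's a tiny sketch of the idea - the variable name is made up, and the /p suffix is what tells WSL to translate the path:

C:\>set MYPROJECT=C:\github\hanselminutes-core
C:\>set WSLENV=MYPROJECT/p
C:\>bash
$ echo $MYPROJECT
/mnt/c/github/hanselminutes-core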

You can also use a special built-in command line called "wslpath" to translate path names between Windows and WSL. This is useful if you're sharing bash scripts, doing cross-platform scripts (I have PowerShell Core scripts that run in both places) or just need to programmatically switch path types.

131 → $ wslpath "d:\github\hanselminutes-core"

/mnt/d/github/hanselminutes-core
131 → $ wslpath "c:\Users\scott\Desktop"
/mnt/c/Users/scott/Desktop

There is no man page for wslpath yet, but copied from this GitHub issue, here's the gist:

wslpath usage:

-a force result to absolute path format
-u translate from a Windows path to a WSL path (default)
-w translate from a WSL path to a Windows path
-m translate from a WSL path to a Windows path, with ‘/’ instead of ‘\\’

One final note: once you've installed a Linux distro from the Windows Store, it's on you to keep it up to date. The Windows Store won't run "apt upgrade" or ever touch your Linuxes once they have been installed. Additionally, you can have Ubuntu 16.04 and 18.04 installed side-by-side and it won't hurt anything.
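
So every once in a while, hop into each distro and give it the usual Debian/Ubuntu refresher yourself:

sudo apt update && sudo apt upgrade -y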

Related Links

Are you using WSL?


Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!


