
New prescriptive guidance for Open Source .NET Library Authors


There's a great new bunch of guidance just published representing Best Practices for creating .NET Libraries. Best of all, it was shepherded by JSON.NET's James Newton-King. Who better to help explain the best way to build and publish a .NET library than the author of the world's most popular open source .NET library?

Perhaps you've got an open source (OSS) .NET Library on your GitHub, GitLab, or Bitbucket. Go check out the open-source library guidance.

These are the identified aspects of high-quality open-source .NET libraries:

  • Inclusive - Good .NET libraries strive to support many platforms and applications.
  • Stable - Good .NET libraries coexist in the .NET ecosystem, running in applications built with many libraries.
  • Designed to evolve - .NET libraries should improve and evolve over time, while supporting existing users.
  • Debuggable - .NET libraries should use the latest tools to create a great debugging experience for users.
  • Trusted - .NET libraries have developers' trust by publishing to NuGet using security best practices.

The guidance is deep but also preliminary. As with all Microsoft Documentation these days it's open source in Markdown and on GitHub. If you've got suggestions or thoughts, share them! Be sure to sound off in the Feedback Section at the bottom of the guidance. James and the Team will be actively incorporating your thoughts.

Cross-platform targeting

Since the whole point of .NET Core and the .NET Standard is reuse, this section covers how and why to make reusable code but also how to access platform-specific APIs when needed with multi-targeting.
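
As a quick illustration (a sketch of the technique, not a sample from the guidance - the target frameworks and the API here are made up), a library can target .NET Standard plus .NET Framework and light up a platform-specific API behind #if:

// Assumes the .csproj declares:
//   <TargetFrameworks>netstandard2.0;net461</TargetFrameworks>
public static class ClipboardHelper // hypothetical API
{
    public static void SetText(string text)
    {
#if NET461
        // The .NET Framework build can call a Windows-only API.
        System.Windows.Forms.Clipboard.SetText(text);
#else
        // The .NET Standard build has no clipboard; fail clearly.
        throw new System.PlatformNotSupportedException(
            "SetText is only supported on the .NET Framework build.");
#endif
    }
}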

Strong naming

Strong naming seemed like a good idea but you should know WHY and WHEN to strong name. It all depends on your use case! Are you publishing internally or publicly? What are your dependencies and who depends on you?
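
If you do decide strong naming fits your scenario, it's two properties in the library's project file. A minimal sketch - the key file name here is an assumption, and you'd generate the key with "sn -k key.snk":

<PropertyGroup>
  <SignAssembly>true</SignAssembly>
  <AssemblyOriginatorKeyFile>key.snk</AssemblyOriginatorKeyFile>
</PropertyGroup>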

NuGet

When publishing on the NuGet public repository (or your own private/internal one), what do you need to know about SemVer 2.0.0? What about pre-release packages? Should you embed PDBs for easier debugging? Consider things like dependencies, SourceLink, how and where to publish, and how versioning applies to you and when (or if) you cause breaking changes.

Also be sure to check out Immo's video on "Building Great Libraries with .NET Standard" on YouTube!


Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.




Customer Notes: Diagnosing issues under load of Web API app migrated to ASP.NET Core on Linux


When the engineers on the ASP.NET/.NET Core team talk to real customers about actual production problems they have, interesting stuff comes up. I've tried to capture a real customer interaction here without giving away their name or details.

The team recently had the opportunity to help a large customer of .NET investigate performance issues they’ve been having with a newly-ported ASP.NET Core 2.1 app when under load. The customer's developers are experienced with ASP.NET on Windows but in this case they needed help getting started with performance investigations with ASP.NET Core in Linux containers.

As with many performance investigations, there were a variety of issues contributing to the slowdowns, but the largest contributors were time spent garbage collecting (due to unnecessary large object allocations) and blocking calls that could be made asynchronous.

After resolving the technical and architectural issues detailed below, the customer's Web API went from only being able to handle several hundred concurrent users during load testing to being able to easily handle 3,000 and they are now running the new ASP.NET Core version of their backend web API in production.

Problem Statement

The customer recently migrated their .NET Framework 4.x ASP.NET-based backend Web API to ASP.NET Core 2.1. The migration was broad in scope and included a variety of tech changes.

Their previous version of the Web API (we'll call it version 1) ran as an ASP.NET application (targeting .NET Framework 4.7.1) under IIS on Windows Server and used SQL Server databases (via Entity Framework) to persist data. The new (2.0) version of the application runs as an ASP.NET Core 2.1 app in Linux Docker containers with PostgreSQL backend databases (via Entity Framework Core). They used Nginx to load balance between multiple containers on a server and HAProxy load balancers between their two main servers. The Docker containers are managed manually or via Ansible integration for CI/CD (using Bamboo).

Although the new Web API worked well functionally, load tests began failing with only a few hundred concurrent users. Based on current user load and projected growth, they wanted the web API to support at least 2,000 concurrent users. Load testing was done using Visual Studio Team Services load tests running a combination of web tests mimicking users logging in, doing the stuff of their business, activating tasks in their application, as well as pings that the Mobile App's client makes regularly to check for backend connectivity. This customer also uses New Relic for application telemetry and, until recently, New Relic agents did not work with .NET Core 2.1. Because of this, there was unfortunately no app diagnostic information to help pinpoint sources of slowdowns.

Lessons Learned

Cross-Platform Investigations

One of the most interesting takeaways for me was not the specific performance issues encountered but, instead, the challenges this customer had working in a Linux environment. The team's developers are experienced with ASP.NET on Windows and comfortable debugging in Visual Studio. Despite this, the move to Linux containers has been challenging for them.

Because the engineers were unfamiliar with Linux, they hired a consultant to help deploy their Docker containers on Linux servers. This model worked to get the site deployed and running, but became a problem when the main backend began exhibiting performance issues. The performance problems only manifested themselves under a fairly heavy load, such that they could not be reproduced on a dev machine. Up until this investigation, the developers had never debugged on Linux or inside of a Docker container except when launching in a local container from Visual Studio with F5. They had no idea how to even begin diagnosing issues that only reproduced in their staging or production environments. Similarly, their dev-ops consultant was knowledgeable about Linux infrastructure but not familiar with application debugging or profiling tools like Visual Studio.

The ASP.NET team has some documentation on using PerfCollect and PerfView to gather cross-platform diagnostics, but the customer's devs did not manage to find these docs until they were pointed out. Once an ASP.NET Core team engineer spent a morning showing them how to use PerfCollect, LLDB, and other cross-platform debugging and performance profiling tools, they were able to make some serious headway debugging on their own. We want to make sure everyone can debug .NET Core on Linux with LLDB/SOS or remotely with Visual Studio as easily as possible.

The ASP.NET Core team now believes they need more documentation on how to diagnose issues in non-Windows environments (including Docker) and the documentation that already exists needs to be more discoverable. Important topics to make discoverable include PerfCollect, PerfView, debugging on Linux using LLDB and SOS, and possibly remote debugging with Visual Studio over SSH.

Issues in Web API Code

Once we gathered diagnostics, most of the perf issues ended up being common problems in the customer’s code. 

  1. The largest contributor to the app’s slowdown was frequent Generation 2 (Gen 2) GCs (Garbage Collections) which were happening because a commonly-used code path was downloading a lot of images (product images), converting those bytes into base64 strings, responding to the client with those strings, and then discarding the byte[] and string. The images were fairly large (>100 KB), so every time one was downloaded, a large byte[] and string had to be allocated. Because many of the images were shared between multiple clients, we solved the issue by caching the base64 strings for a short period of time (using IMemoryCache). A sketch of this fix, combined with the HttpClient pooling from item 2, appears after this list.
  2. HttpClient Pooling with HttpClientFactory
    1. When calling out to Web APIs there was a pattern of creating new HttpClient instances rather than using IHttpClientFactory to pool the clients.
    2. Despite implementing IDisposable, it is not a best practice to dispose HttpClient instances as soon as they’re out of scope as they will leave their socket connection in a TIME_WAIT state for some time after being disposed. Instead, HttpClient instances should be re-used.
  3. Additional investigation showed that much of the application’s time was spent querying PostgreSQL for data (as is common). There were several underlying issues here.
    1. Database queries were being made in a blocking way instead of being asynchronous. We helped address the most common call-sites and pointed the customer at the AsyncUsageAnalyzer to identify other async cleanup that could help.
    2. Database connection pooling was not enabled. It is enabled by default for SQL Server, but not for PostgreSQL.
      1. We re-enabled database connection pooling. It was necessary to have different pooling settings for the common database (used by all requests) and the individual shard databases which are used less frequently. While the common database needs a large pool, the shard connection pools need to be small to avoid having too many open, idle connections.
    3. The Web API had a fairly ‘chatty’ interface with the database and made a lot of small queries. We re-worked this interface to make fewer calls (by querying more data at once or by caching for short periods of time).
  4. There was also some impact from having other background worker containers on the web API’s servers consuming large amounts of CPU. This led to a ‘noisy neighbor’ problem where the web API containers didn’t have enough CPU time for their work. We showed the customer how to address this with Docker resource constraints.
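
Here's that sketch: a minimal illustration of the IMemoryCache and IHttpClientFactory fixes from items 1 and 2 above. The ImageService class and its member names are hypothetical stand-ins for the customer's code, and the five-minute cache lifetime is an arbitrary choice.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

// In Startup.ConfigureServices, register the pooled client factory and the cache:
//   services.AddHttpClient();   // enables IHttpClientFactory
//   services.AddMemoryCache();  // enables IMemoryCache

public class ImageService // hypothetical name
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly IMemoryCache _cache;

    public ImageService(IHttpClientFactory clientFactory, IMemoryCache cache)
    {
        _clientFactory = clientFactory;
        _cache = cache;
    }

    public Task<string> GetImageBase64Async(string url)
    {
        // Cache the base64 string briefly so shared images aren't
        // re-downloaded (and their large byte[]s re-allocated) per client.
        return _cache.GetOrCreateAsync(url, async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            HttpClient client = _clientFactory.CreateClient();
            byte[] bytes = await client.GetByteArrayAsync(url);
            return Convert.ToBase64String(bytes);
        });
    }
}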

Wrap Up

As shown in the graph below, at the end of our performance tuning, their backend was easily able to handle 3,000 concurrent users and they are now using their ASP.NET Core solution in production. The performance issues they saw overlapped a lot with those we’ve seen from other customers (especially the need for caching and for async calls), but proved to be extra challenging for the developers to diagnose due to the lack of familiarity with Linux and Docker environments.

Performance and Errors Charts look good, up and to the right
Throughput and Tests Charts look good, up and to the right

Some key areas of focus uncovered by this investigation were:

  • Being mindful of memory allocations to minimize GC pause times

  • Keeping long-running calls non-blocking/asynchronous

  • Minimizing calls to external resources (such as other web services or the database) with caching and grouping of requests

Hope you find this useful! Big thanks to Mike Rousos from the ASP.NET Core team for his work and analysis!


Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.




Dependabot for .NET Core dependency tracking in GitHub


Bump Microsoft.ApplicationInsights.AspNetCore from 2.5.0-beta1 to 2.5.0-beta2

I've been exploring automated dependency tracking lately. I usually use my podcast's ASP.NET Core website that I host on GitHub as a guinea pig. I tried NuKeeper and the dotnet outdated global tool - both of which are fantastic and worth exploring.

This week I'm trying Dependabot. I have no relationship with this company. Public repos and personal account repos are free, their pricing is very clear, and organization accounts start at just $15 with a free trial.

I'm really impressed with how clever Dependabot is. It's almost like a person in its behavior. Yes, I realize that's kind of the point, but it's no less surprising to see. A well-written bot is a joy to behold.

For example, here is a PR (Pull Request) where Dependabot says "Bumps Microsoft.ApplicationInsights.AspNetCore from 2.5.0-beta1 to 2.5.0-beta2."

Basic stuff, right? But that's not all.

It not only does the basics where it noticed that a version bump occurred in a NuGet package, but it also copied the release notes from that NuGet package's release on GitHub! It included links to what was fixed between versions, links to the change logs, AND a complete linked commit list. I mean, that's just lovely.

A few days later, Dependabot went and closed the PR because the dependency had updated again (I was slow), then it commented telling me this PR was superseded by another.

Superseded by #20

Dependabot, like any good bot, also includes commands you can send to it via "Chats" in GitHub PR comments. You can tell it to use specific labels and control milestones. You can also control behavior in the Dependabot Dashboard and have it automerge things like minor versions, or just lock things down to security-only updates.
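
For example, here are the kinds of comment commands it understands. I'm listing these from memory, so check Dependabot's own docs for the current set:

@dependabot rebase                       # rebase this PR on the target branch
@dependabot merge                        # merge once CI passes
@dependabot ignore this minor version    # stop opening PRs for this minor version line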

All in all, it's a very smart bot that supports basically all the languages. .NET support is in Beta, but I haven't had any issues with it. You should definitely check it out. And let me tell you, once you've got everything automated you'll wonder how you ever managed before.


Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.




ASP.NET Core 2.2 Parameter Transformers for clean URL generation and slugs in Razor Pages or MVC


I noticed that last week .NET Core 2.2 Preview 3 was released:

You can download and get started with .NET Core 2.2, on Windows, macOS, and Linux:

Docker images are available at microsoft/dotnet for .NET Core and ASP.NET Core.

.NET Core 2.2 Preview 3 can be used with Visual Studio 15.9 Preview 3 (or later), Visual Studio for Mac and Visual Studio Code.

The feature I am most stoked about in ASP.NET Core 2.2 is a subtle one but I remember implementing it manually many times over the last 10 years. I'm happy to see it nicely integrated into ASP.NET Core's MVC and Razor Pages patterns.

ASP.NET Core 2.2 introduces the concept of Parameter Transformers to routing. Remember there isn't a direct relationship between what's in the URL/address bar and what's on disk. The routing subsystem handles URLs coming in from the client and routes them to Controllers, but it also generates URLs (strings) when given a Controller and Action.

So if I'm using Razor Pages and I have a file Pages/FancyPants.cshtml I can get to it by default at /FancyPants. I can also "ask" for the URL when I'm creating anchors/links in my Razor Page:

<a class="nav-link text-dark" asp-area="" asp-page="/fancypants">Fancy Pants</a>

Here I'm asking for the page. That asp-page attribute points to a logical page, not a physical file.

 

We can make an IOutboundParameterTransformer that changes URLs to a format (for example) like a WordPress standard slug in the two-words format.

public class SlugifyParameterTransformer : IOutboundParameterTransformer
{
    public string TransformOutbound(object value)
    {
        if (value == null) { return null; }
        // Slugify value
        return Regex.Replace(value.ToString(), "([a-z])([A-Z])", "$1-$2").ToLower();
    }
}

Then you let the ASP.NET Pipeline know about this transformer, either in Razor Pages...

services.AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_2)
    .AddRazorPagesOptions(options =>
    {
        options.Conventions.Add(
            new PageRouteTransformerConvention(
                new SlugifyParameterTransformer()));
    });

or in ASP.NET MVC:

services.AddMvc(options =>
{
    options.Conventions.Add(new RouteTokenTransformerConvention(
                                 new SlugifyParameterTransformer()));
});

Now when I run my application, I get my routing both coming in (from the client web browser) and going out (generated via Razor Pages). Here I'm hovering over the "Fancy Pants" link at the top of the page. Notice that it's generated /fancy-pants as the URL.


So that same code from above that generates anchor tags <a href= gives me the expected new style of URL, and I only need to change it in one location.

There is also a new service called LinkGenerator that's a singleton you can call outside the context of an HTTP call (without an HttpContext) in order to generate a URL string.

return _linkGenerator.GetPathByAction(
     httpContext,
     controller: "Home",
     action: "Index",
     values: new { id=42 });

This can be useful if you are generating URLs outside of Razor or in some Middleware. There are a lot more subtle improvements in ASP.NET Core 2.2, but this was the one that I will find the most useful in the near term.


Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.




Side by Side user scoped .NET Core installations on Linux with dotnet-install.sh


I can run .NET Core on Windows, Mac, or a dozen Linuxes. On my Ubuntu installation I can check what version I have installed and where it is like this:

$ dotnet --version
2.1.403
$ which dotnet
/usr/bin/dotnet

If we interrogate that dotnet file we see it's a link to elsewhere:

$ ls -alogF /usr/bin/dotnet
lrwxrwxrwx 1 22 Sep 19 03:10 /usr/bin/dotnet -> ../share/dotnet/dotnet*

If we head over there we see similar stuff as we do on Windows.

Side by side DotNet installs

Basically c:\program files\dotnet is the same as /usr/share/dotnet.

$ cd ../share/dotnet
$ ll
total 136
drwxr-xr-x 1 root root   4096 Oct  5 19:47 ./
drwxr-xr-x 1 root root   4096 Aug  1 17:44 ../
drwxr-xr-x 1 root root   4096 Feb 13  2018 additionalDeps/
-rwxr-xr-x 1 root root 105704 Sep 19 03:10 dotnet*
drwxr-xr-x 1 root root   4096 Feb 13  2018 host/
-rw-r--r-- 1 root root   1083 Sep 19 03:10 LICENSE.txt
drwxr-xr-x 1 root root   4096 Oct  5 19:48 sdk/
drwxr-xr-x 1 root root   4096 Aug  1 18:07 shared/
drwxr-xr-x 1 root root   4096 Feb 13  2018 store/
-rw-r--r-- 1 root root  27700 Sep 19 03:10 ThirdPartyNotices.txt
$ ls sdk
2.1.4  2.1.403  NuGetFallbackFolder
$ ls shared
Microsoft.AspNetCore.All  Microsoft.AspNetCore.App  Microsoft.NETCore.App
$ ls shared/Microsoft.NETCore.App/
2.0.5  2.1.5

Looking in directories works to figure out what SDKs and Runtime versions are installed, but the best way is to use the dotnet CLI itself, like this:

$ dotnet --list-sdks
2.1.4 [/usr/share/dotnet/sdk]
2.1.403 [/usr/share/dotnet/sdk]
$ dotnet --list-runtimes
Microsoft.AspNetCore.All 2.1.5 [/usr/share/dotnet/shared/Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.5 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.0.5 [/usr/share/dotnet/shared/Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.5 [/usr/share/dotnet/shared/Microsoft.NETCore.App]

There are great instructions on how to set up .NET Core on your Linux machines via Package Manager here.

Note that these installs of the .NET Core SDK live in /usr/share. I can use dotnet-install.sh to do non-admin installs in my own user directory.

In order to gain more control and do things more manually, you can use this shell script: https://dot.net/v1/dotnet-install.sh - its documentation is in the docs. For Windows there is also a PowerShell version: https://dot.net/v1/dotnet-install.ps1

The main usefulness of these scripts is in automation scenarios and non-admin installations. There are two scripts: One is a PowerShell script that works on Windows. The other script is a bash script that works on Linux/macOS. Both scripts have the same behavior. The bash script also reads PowerShell switches, so you can use PowerShell switches with the script on Linux/macOS systems.

For example, I can see all the current .NET Core 2.1 versions at https://www.microsoft.com/net/download/dotnet-core/2.1 and 2.2 at https://www.microsoft.com/net/download/dotnet-core/2.2 - the URL format is regular. I can see from that page that at the time of this blog post, v2.1.5 is both Current (most recent stable) and also LTS (Long Term Support).

I'll grab the install script and chmod +x it. Running it with no options will get me the latest LTS release.

$ wget https://dot.net/v1/dotnet-install.sh
--2018-10-31 15:41:08--  https://dot.net/v1/dotnet-install.sh
Resolving dot.net (dot.net)... 104.214.64.238
Connecting to dot.net (dot.net)|104.214.64.238|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 30602 (30K) [application/x-sh]
Saving to: ‘dotnet-install.sh’

I like the "-DryRun" option because it will tell you what WILL happen without doing it.

$ ./dotnet-install.sh -DryRun
dotnet-install: Payload URL: https://dotnetcli.azureedge.net/dotnet/Sdk/2.1.403/dotnet-sdk-2.1.403-linux-x64.tar.gz
dotnet-install: Legacy payload URL: https://dotnetcli.azureedge.net/dotnet/Sdk/2.1.403/dotnet-dev-ubuntu.16.04-x64.2.1.403.tar.gz
dotnet-install: Repeatable invocation: ./dotnet-install.sh --version 2.1.403 --channel LTS --install-dir <auto>

If I use the dotnet-install script, I can have multiple copies of the .NET Core SDK installed in my user folder at ~/.dotnet. It all depends on your PATH. Note this as I use ~/.dotnet for my .NET Core install location and run dotnet --list-sdks. Make sure you know what your PATH is and that you're getting the .NET Core you expect for your user.

$ which dotnet
/usr/bin/dotnet
$ export PATH=/home/scott/.dotnet:$PATH
$ which dotnet
/home/scott/.dotnet/dotnet
$ dotnet --list-sdks
2.1.402 [/home/scott/.dotnet/sdk]

Now I will add a few more .NET Core SDKs side-by-side with the dotnet-install.sh script. Remember again, these aren't .NET Core SDKs installed with apt-get, which would be system level and run with sudo. These are user profile installed versions.

There's really no reason to do side by side at THIS level of granularity, but it makes the point.
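
The invocations look something like this. The version numbers are chosen arbitrarily to make the point; --version and --install-dir are documented switches of the script:

$ ./dotnet-install.sh --version 2.1.302 --install-dir ~/.dotnet
$ ./dotnet-install.sh --version 2.1.400 --install-dir ~/.dotnet
$ ./dotnet-install.sh --version 2.1.401 --install-dir ~/.dotnet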

$ dotnet --list-sdks
2.1.302 [/home/scott/.dotnet/sdk]
2.1.400 [/home/scott/.dotnet/sdk]
2.1.401 [/home/scott/.dotnet/sdk]
2.1.402 [/home/scott/.dotnet/sdk]
2.1.403 [/home/scott/.dotnet/sdk]

When you're doing your development, you can use "dotnet new globaljson" and have each path/project request a specific SDK version.

$ dotnet new globaljson
The template "global.json file" was created successfully.
$ cat global.json
{
  "sdk": {
    "version": "2.1.403"
  }
}

Hope this helps!


Sponsor: Reduce time to market and simplify IOT development using developer kits built on Intel Atom®, Intel® Core™ and Intel® Xeon® processors and tools such as Intel® System Studio and Arduino Create*




.NET Core and .NET Standard for IoT - The potential of the Meadow Kickstarter


I saw this Kickstarter today - Meadow: Full-stack .NET Standard IoT platform. It says that "It combines the best of all worlds; it has the power of RaspberryPi, the computing factor of an Arduino, and the manageability of a mobile app. And best part? It runs full .NET Standard on real IoT hardware."

NOTE: I don't have any relationship with the company/people behind this Kickstarter, but it seems interesting so I'm sharing it with you. As with all Kickstarters, it's not a pre-order, it's an investment that may not pan out, so always be prepared to lose your investment. I lost mine with the .NET "Agent" SmartWatch even though all signs pointed to success.

I've done IoT work on Raspberry Pis which is much easier lately with the emerging community support for ARM32, Raspberry Pis, and cool stuff happening on Windows 10 IoT. I've written on how easy it is to get running on Raspberry Pi. I was even able to get my own podcast website running on Raspberry Pi and in Docker.

This Meadow Kickstarter says it's running on the Mono Runtime and will support the .NET Standard 2.0 API. That means that you likely already know how to program to it. Most libraries on NuGet are .NET Standard compliant so a ton of open source software should "Just Work" on any solution that supports .NET Standard.

One thing that seems interesting about Meadow is this sentence: "The power of Raspberry Pi in the computing factor of an Arduino, and the manageability of a mobile app."

Raspberry Pis are full-on computers. Arduinos are small little (mostly) single-tasked devices. Microcomputer vs Microcontroller. It's overkill to have Ubuntu on a computer just to turn on a device. You usually want IoT devices to have as small a surface area as possible.

Meadow says "Meadow has been designed to run on a variety of microcontrollers, and our first board is based on STMicroelectronics' flagship STM32F7 MCU. The Meadow F7 Micro board is an embeddable module that's based on Adafruit Feather form factor." Remember, we are talking megs not gigs here. "We've paired the STM32F7 with 32MB of flash storage and 16MB of RAM, so you can run pretty much anything you can think of building." This is just a 216MHz ARM board.

Be sure to scroll all the way down to the bottom of the page as they outline risks as well as what's left to be done.

What do you think? While you are at it, check out (total coincidence) our sponsor this week is Intel IoT! They have some great developer kits.


Sponsor: Reduce time to market and simplify IOT development using developer kits built on Intel Atom®, Intel® Core™ and Intel® Xeon® processors and tools such as Intel® System Studio and Arduino Create*



Updating my ASP.NET Website from .NET Core 2.2 Preview 2 to .NET Core 2.2 Preview 3


I've recently returned from a month in South Africa and I was looking to unwind while the jetlagged kids sleep. I noticed that .NET Core 2.2 Preview 3 came out while I wasn't paying attention. My podcast site runs on .NET Core 2.2 Preview 2 so I thought it'd be interesting to update the site. That means I'd need to install the new SDK, update the project references, ensure it builds in Azure DevOps's CI/CD Pipeline, AND deploys and runs in Azure.

Let's see how it goes. I'm a little out of it but I'm writing this blog post AS I DO THE WORK so you'll see my train of thought with no editing.

Ok, what version of .NET Core does this machine have?

C:\Users\scott> dotnet --version

2.2.100-preview2-009404
C:\Users\scott> dotnet tool update --global dotnet-outdated
Tool 'dotnet-outdated' was successfully updated from version '2.0.0' to version '2.1.0'.

Looks like I'm on Preview 2 as I guessed. I'll take a moment and upgrade one Global Tool I love - dotnet-outdated - in case it's been updated since I've been out. Looks like it has a minor update. Dotnet Outdated is a great utility for checking references and you should absolutely be using it or another tool like NuKeeper or Dependabot.

I'll head over to https://www.microsoft.com/net/download/dotnet-core/2.2 and get .NET Core 2.2 Preview 3. I'm building on Windows but I may want to update my Linux (WSL) install and Docker images later.

All right, installed. Check it with dotnet --version to confirm it's correct:

C:\Users\scott> dotnet --version

2.2.100-preview3-009430

Let's try to build my podcast website. Note that it consists of two projects, the main website on ASP.NET Core, and Unit Tests with XUnit and Selenium.

D:\github\hanselminutes-core [main ≡]> dotnet build

Microsoft (R) Build Engine version 15.9.8-preview+g0a5001fc4d for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Restoring packages for D:\github\hanselminutes-core\hanselminutes.core.tests\hanselminutes.core.tests.csproj...
Restore completed in 80.05 ms for D:\github\hanselminutes-core\hanselminutes.core.tests\hanselminutes.core.tests.csproj.
Restore completed in 25.4 ms for D:\github\hanselminutes-core\hanselminutes.core\hanselminutes-core.csproj.
D:\github\hanselminutes-core\hanselminutes.core.tests\hanselminutes.core.tests.csproj : error NU1605: Detected package downgrade: Microsoft.AspNetCore.App from 2.2.0-preview3-35497 to 2.2.0-preview2-35157. Reference the package directly from the project to select a different version. [D:\github\hanselminutes-core\hanselminutes-core.sln]

The dotnet build fails, which makes sense, because it's saying hey, you're asking for 2.2 Preview 2 but I've got Preview 3 all ready for you!

Detected package downgrade: Microsoft.AspNetCore.App from 2.2.0-preview3-35497 to 2.2.0-preview2-35157

Let's see what "dotnet outdated" says about this!

dotnet outdated says there's a few packages I need to update

Cool! I love these dependency tools and the community around them. You can see that it's noticed the Preview 2 -> Preview 3 opportunity, as well as a few other smaller minor or patch version bumps.

I can run dotnet outdated -u to automatically update the references, but I'll want to treat the "reference" of "Microsoft.AspNetCore.App" a little differently and use implicit versioning. You don't want to include a specific version - as I did - for this package.

Per the docs for .NET Core 2.1 and up:

Remove the "Version" attribute on the package reference to Microsoft.AspNetCore.App. Projects which use <Project Sdk="Microsoft.NET.Sdk.Web"> do not need to set the version. The version will be implied by the target framework and selected to best match the way ASP.NET Core 2.1 works. (See below for more information.)

Doing this also fixes the build because it picks up the latest 2.2 SDK automatically! Now I'll run my Unit Tests (with code coverage) and see how it works. Cool, all tests pass (including Selenium).

88% Code Coverage

It builds locally, will it build in Azure DevOps when I check it in to GitHub?

Azure DevOps

I added a .NET Core SDK installer step when I set up my Azure DevOps Pipeline. This is where I'm explicitly installing a Preview version of the .NET Core SDK.

While I'm in here I noticed the Azure DevOps pipeline was using NuGet 4.4.1. I ran "nuget update -self" on my local machine and got 4.7.1, so I updated that version as well to make the CI/CD pipeline reflect my own machine.

Now I'll git add, git commit (using verified/signed GitHub commits with my PGP Key and Yubikey):

D:\github\hanselminutes-core [main ≡ +0 ~2 -0 !]> git add .

D:\github\hanselminutes-core [main ≡ +0 ~2 -0 ~]> git commit -m "bump to 2.2 Preview 3"
[main 7a84bc7] bump to 2.2 Preview 3
2 files changed, 16 insertions(+), 13 deletions(-)

Add in a Git Push...and I can see the build start in Azure DevOps:

CI/CD pipeline build starting

Cool. While that's building, I'll make sure my existing Azure App Service (website) installation is ready to receive the deployment (assuming the build succeeds). Since I'm using an ASP.NET Core Preview build I'll want to make sure I have the Preview Site Extension installed, per the docs.

If I visit the Site Extensions menu item in the Azure Portal I can see I've got .NET Core 2.2 Preview 2, but there's an update available, as expected.

Update Available

I'll click this extension and then click Update. This extension's job is to make sure the App Service gets Preview versions of the .NET Core SDK. Only released (GA - general availability) SDKs are installed by default.

OK, .NET Core 2.2 is all updated in Azure, so I'll confirm that it's deployed as well in Azure DevOps. Yes, I'm deploying into Production without a net. Seriously, though, if there is an issue I'll just rollback. If I was deeply serious about downtime I'd be doing all this in Staging.


Successful local test, successful CI/CD build and test, successful deployment, and the site is back up now running on ASP.NET Core 2.2 Preview 3. It took about 45 min to do the work while simultaneously taking these screenshots and writing this blog post during the slow parts.

Good night everyone!


Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.




Terminus and FluentTerminal are the start of a world of 3rd party OSS console replacements for Windows


Folks have been trying to fix and supercharge the console/command line on Windows since Day One. There's a ton of open source projects over the years that try to take over or improve on "conhost.exe" (the thing that handles consoles like Bash/PowerShell/cmd on Windows). Most of these 3rd party consoles have weird or subtle issues. For example, I like Hyper as a terminal but it doesn't support Ctrl-C at the command line. I use that hotkey often enough that this small bug means I just won't use that console at all.

Per the CommandLine blog:

One of those weaknesses is that Windows tries to be "helpful" but gets in the way of alternative and 3rd party Console developers, service developers, etc. When building a Console or service, developers need to be able to access/supply the communication pipes through which their Terminal/service communicates with command-line applications. In the *NIX world, this isn't a problem because *NIX provides a "Pseudo Terminal" (PTY) infrastructure which makes it easy to build the communication plumbing for a Console or service, but Windows does not...until now!

Looks like the Windows Console team is working on making 3rd party consoles better by creating this new PTY mechanism:

We've heard from many, many developers, who've frequently requested a PTY-like mechanism in Windows - especially those who created and/or work on ConEmu/Cmder, Console2/ConsoleZ, Hyper, VSCode, Visual Studio, WSL, Docker, and OpenSSH.

Very cool! Until it's ready I'm going to continue to try out new consoles. A lot of people will tell you to use the cmder package that includes ConEmu. There's a whole world of 3rd party consoles to explore. Even more fun are the choices of color schemes and fonts to explore.

For a while I was really excited about Hyper. Hyper is - wait for it - an Electron app that uses HTML/CSS for the rendering of the console. This is a pretty heavyweight solution to the rendering that means you're looking at 200+ megs of memory for a console rather than 5 megs or so for something native. However, it is a clever way to just punt and let a browser renderer handle all the complex font management. For web-folks it's also totally extensible and skinnable.

As much as I like Hyper and its look, the inability to support hitting "Ctrl-C" at the command line is just too annoying. It appears it's a very well-understood issue that will ultimately be solved by the ConPTY work as the underlying issue is a deficiency in the node-pty library. It's also a long-running issue in the VS Code console support. You can watch the good work that's starting in this node-pty PR that will fix a lot of issues for node-based consoles.

Until this all fixes itself, I'm personally excited (and using) these two terminals for Windows that you may not have heard of.

Terminus

Terminus is open source over at https://github.com/Eugeny/terminus and works on any OS. It's immediately gorgeous, and while it's in alpha, it's very polished. Be sure to explore the settings and adjust things like Blur/Fluent, Themes, opacity, and fonts. I'm using FiraCode Retina with Ligatures for my console and it's lovely. You'll have to turn ligature support on explicitly under Settings | Appearance.

Terminus is a lovely console replacement

Terminus also has some nice plugins. I've added Altair, Clickable-Links, and Shell-Selector to my loadout. The shell selector makes it easy on Windows 10 to have PowerShell, Cmd, and Ubuntu/Bash open all at the same time in multiple tabs.

I did do a little editing of the default config file to set up Ctrl-T for new tab and Ctrl-W for close-tab for my personal taste.

FluentTerminal

FluentTerminal is a Terminal Emulator based on UWP. Its memory usage on my machine is about 1/3 of Terminus and under 100 megs. As a Windows 10 UWP app it looks and feels very native. It supports ALT-ENTER Fullscreen, and tabs for as many consoles as you'd like. You can right-click and color specific tabs which was a nice surprise and turned out to be useful for on-the-fly categorization.


FluentTerminal has a nice themes setup and includes a half-dozen to start, plus supports imports.

It's not yet in the Windows Store (perhaps because it's in active development) but you can easily download a release and install it with a PowerShell install.ps1 script.

I have found the default Keybindings very intuitive with the usual Ctrl-T and Ctrl-W tab managers already set up, as well as Shift-Ctrl-T for opening a new tab for a specific shell profile (cmd, powershell, wsl, etc).

Both of these are great new entries in the 3rd party terminal space and I'd encourage you to try them both out and perhaps get involved on their respective GitHubs! It's a great time to be doing console work on Windows 10!


Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.




Web Development and Advanced Techniques with Linux on Windows (WSL)


I've posted several times on the Windows Subsystem for Linux that allows you to run Linux on Windows 10 without a VM. Check out my YouTube on Editing code and files on Windows Subsystem for Linux on Windows 10. There's just one rule. You can mess with Windows files from Linux but you can't mess with Linux files from Windows. Otherwise, go crazy and enjoy.

WSL is pretty fantastic. Although its disk access is slower than native Linux, I find myself using it every day. If you want to set up Linux on your Windows 10 machine, just turn it on, then head over to the Windows Store and search for "Linux."

You can turn on Linux on Windows 10 by typing "Windows Features" and checking "Windows Subsystem for Linux." Then get a Linux from the Windows Store.

If you prefer to use PowerShell and do it in one line, just do this from an Admin PowerShell prompt:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Then go get any one (or more!) of the Linux distros from the Store.

When you're in a Windows shell like PowerShell or CMD you might want to run Linux and/or jump comfortably between shells. You can do that in a few ways. The best and recommended way is running "wsl.exe" as that will start up your default distro. You can also just type the name of the distro. So I can type "ubuntu" and get in there directly.

You can type "bash" but that's not recommended if you've changed shells. If you've set up zsh or fish and type bash, it's gonna still try to run bash.

Here I've typed wslconfig and you can see I've got both Ubuntu and Debian installed, with Ubuntu as the default when I type "wsl."

C:\Users\scott>wslconfig /list

Windows Subsystem for Linux Distributions:
Ubuntu-18.04 (Default)
Debian

Now that I know how to run wsl from anywhere, I can even pipe stuff in and out of Linux from the outside. For example, here I am in cmd.exe calling commands in Linux; output comes out, goes back in, and so on. You can mix and match however you'd like!

C:\dev>type hello.sh

echo Hello
C:\dev>wsl cat /mnt/c/dev/hello.sh | wsl fromdos | wsl /bin/sh
Hello

This means even when I'm in CMD or PowerShell I can use Linux commands that are convenient or familiar to me. For example, here I'm piping a Windows Update log file into the Linux sha1sum command. Note the use of - to accept standard input - even though that input is from Windows!

C:\Users\scott\Desktop>type WindowsUpdate.log | wsl sha1sum -

3b48adce8f6c9cb816e8845d824dacc0440ca1b8 -

Sweet. There are a number of nice advanced techniques if you want to make your WSL installations smarter AND automatically configured. You can make a file in /etc/wsl.conf to affect your DNS, metadata, and drive mounting.
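
Here's a minimal /etc/wsl.conf sketch. These section and key names come from the WSL docs, but treat the specific values as illustrative, not a recommendation:

[automount]
enabled = true
root = /mnt/
options = "metadata,umask=22,fmask=11"

[network]
generateHosts = true
generateResolvConf = true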

When you are in a WSL shell, your Windows drive (your main drive) is at /mnt/c. So here is my Windows desktop as viewed from WSL:

screenfetch in WSL

I do most of my dev work in /mnt/d/github, for example. That way I can use VS Code from Windows but run Node/Ruby/Go/Whatever from WSL.

I keep my files on my Windows drive, edit them in VS Code, but run things in WSL. Again, never use Windows utilities to reach into and/or edit files on the WSL/Linux subsystem. Also, always be conscious of your CR/LF situation, and be especially careful if you're going to run git in both Windows and WSL.
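
One common mitigation for the line-ending issue - an option, not the only approach - is to tell Git on the WSL side to convert CRLF to LF on commit but leave files untouched on checkout:

$ git config --global core.autocrlf input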

Here's VS Code at the top, WSL/Ubuntu running Node at the bottom, and the local node app running in Edge on Windows on the lower right. We are sharing file systems and network port space:

Cross platform Web Dev

You can even share environment variables between WSL and Windows with a special environment variable called WSLENV. This is pretty advanced but super powerful. Read this carefully. You make an environment variable that is a list of names of other variables that you want translated between environments.

That means you can do something like this. I'm in WSL and I have an environment variable that points to a location on the filesystem. I need it to be correct in both worlds.

scott@IRONHEART:/mnt/d$ export MYLINUXPATH=/mnt/d/github/expresstest

scott@IRONHEART:/mnt/d$ export WSLENV=MYLINUXPATH/p
scott@IRONHEART:/mnt/d$ cmd.exe
D:\>echo %MYLINUXPATH%
D:\github\expresstest

Read that carefully. It's awesome and it's very configurable.

There's lots of users of WSL and many have assembled great lists of resources like Awesome-WSL by Hayden.

It's also worth pointing out that WSL is just now one console you can choose from. There's PowerShell, CMD.exe, and a half dozen Linuxes. You can even make your own custom Linux Distro for your company if you like. And there's a whole world of 3rd party Consoles that sit on top of/replace conhost.exe so you can have consoles with tabs, cool fonts, ones based on web tech, whatever! You can even choose WSL/bash as your default shell in Visual Studio Code if you'd like with Ctrl+~.

Hope this gets you started with Linux on Windows. What did I miss? Sound off in the comments.


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.




Compiling C# to WASM with Mono and Blazor then Debugging .NET Source with Remote Debugging in Chrome DevTools


Blazor quietly marches on. In case you haven't heard (I've blogged about Blazor before), it's based on a deceptively simple idea - what if we could run .NET Standard code in the browser? No, not Silverlight; Blazor requires no plugins and doesn't introduce new UI concepts. What if we took the AOT (Ahead of Time) compilation work pioneered by Mono and Xamarin that can compile C# to Web Assembly (WASM) and added a nice UI that embraced HTML and the DOM?

Sound bonkers to you? Are you a hater? Think this solution is dumb or not for you? To the left.

For those of you who want to be wacky and amazing, consider that you can do this at the command line:

$ cat hello.cs
class Hello {
    static int Main(string[] args) {
        System.Console.WriteLine("hello world!");
        return 0;
    }
}
$ mcs -nostdlib -noconfig -r:../../dist/lib/mscorlib.dll hello.cs -out:hello.exe
$ mono-wasm -i hello.exe -o output
$ ls output
hello.exe index.html index.js index.wasm mscorlib.dll

Then you could do this in the browser...look closely on the right side there.

You can see the Mono runtime compiled to WASM coming down. Note that Blazor IS NOT compiling your app into WASM. It's sending Mono (compiled as WASM) down to the client, then sending your .NET Standard application DLLs unchanged down to run within the context of a client-side runtime. All using Open Web tools. All Open Source.

Blazor uses Mono to run .NET in the browser

So Blazor allows you to make SPAs (Single Page Apps) much like the Angular/Vue/React apps out there today, except you're writing only C# and Razor (HTML).

Consider this basic example.

@page "/counter"


<h1>Counter</h1>
<p>Current count: @currentCount</p>
<button class="btn btn-primary" onclick="@IncrementCount">Click me</button>

@functions {
int currentCount = 0;
void IncrementCount() {
currentCount++;
}
}

You hit the button, it calls some C# that increments a variable. That variable is referenced higher up and automatically updated. This is a trivial example. Check out the source for FlightFinder for a real Blazor application.

This is stupid, Scott. How do I debug this mess? I see you're using Chrome but seriously, you're compiling C# and running in the browser with Web Assembly (how prescient) but it's an undebuggable black box of a mess, right?

I say nay nay!

C:\Users\scott\Desktop\sweetsassymollassy> $Env:ASPNETCORE_ENVIRONMENT = "Development"

C:\Users\scott\Desktop\sweetsassymollassy> dotnet run --configuration Debug
Hosting environment: Development
Content root path: C:\Users\scott\Desktop\sweetsassymollassy
Now listening on: http://localhost:5000
Now listening on: https://localhost:5001
Application started. Press Ctrl+C to shut down.

Then Win+R and run this command (after shutting down all the Chrome instances)

%programfiles(x86)%\Google\Chrome\Application\chrome.exe --remote-debugging-port=9222 http://localhost:5000

Now with your Blazor app running, hit Shift+ALT+D (or Shift+SILLYMACKEY+D) and behold.

Feel free to click and zoom in. We're at a breakpoint in some C# within a Razor page...in Chrome DevTools.

HOLY CRAP IT IS DEBUGGING C# IN CHROME

What? How?

Blazor provides a debugging proxy that implements the Chrome DevTools Protocol and augments the protocol with .NET-specific information. When the debugging keyboard shortcut is pressed, Blazor points the Chrome DevTools at the proxy. The proxy connects to the browser window you're seeking to debug (hence the need to enable remote debugging).

It's just getting started. It's limited, but it's awesome. Amazing work being done by lots of teams all coming together into a lovely new choice for the open source web.


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.




How to build a Wall Mounted Family Calendar and Dashboard with a Raspberry Pi and cheap monitor


I love dashboards. I love Raspberry Pis (tiny $35 computers the size of a set of playing cards). And I'm cheap - er, frugal. I found a 24" old LCD at Goodwill (a local thrift shop) and bought it, but it's been sitting unused in my garage.

Then I stumbled on DakBoard. The idea is simple - A wifi connected wall display for your photos, calendar, news, weather and to-do.

The implementation is simple genius. It's a browser that starts up full screen (kiosk mode) and just sits there and updates occasionally. DakBoard provides the private webpage and tools to make that happen. You can certainly build this yourself with any number of open source tools. I chose DakBoard because it was simple, beautiful, and I was able to get the whole thing done in less than an hour. I'm sure I'll spend many hours tweaking it, though. There's also the very popular MagicMirror platform, so there's lots of choice and power in this space!

What are some considerations?

  • You may want to turn it off on a schedule to save power and the screen
    • cronjob - turn it off on a schedule
    • sensor - turn it on when something (your alarm, Nest, a thermostat motion detector attached to GPIO, etc.) detects your presence
  • It has to act like an appliance. If you are messing with it to keep it alive, it's not an appliance, it's another computer to manage.
  • It has to just work. If my Spouse doesn't like the idea or finds it's not reliable, the SAF (Spouse Acceptance Factor) will be low and they'll want to get rid of it. All it takes is one "why isn't this working" and I'm dead in the water.
  • Finally - What do you want to show?

Someone asked me - "What would I want to put on my dashboard other than a calendar? I don't see why this is useful."

What would you put on a Glanceable Display?

Family Calendar(s), movie times, temperature, news, my blood sugar, disk free on my NAS, TV schedule, family photos, commute traffic, album releases, homework due soon, family events, trips, flight status, music playing now, literally anything you want as a glance-able display. 

Glanceable Dashboard

Philosophy

You'll want to ask yourself, is this just an iPad on the wall? I'd propose not. In fact, I'd say this is a Wall Mounted Glanceable Display - a personal dashboard - not an interactive thing. I want the family and kids to just stop by, note important information and move on.

It's also worth pointing out that a horizontal monitor on the wall looks like, well, a monitor on the wall. But somehow when it's Portrait it's dramatic. It's not something we are (yet) used to seeing. I may try this out in a few ways, or even make a few of these displays!

How to Build a Raspberry Pi-based Family Calendar

It's pretty easy! I used the DakBoard Blog but I had most of the stuff already.

  • Get a $35 Raspberry Pi 3. The 3 is fast and includes Wifi so you don't need an extra adapter.
  • I like a 2.5A power supply but some folks say you can run the Raspberry Pi off the monitor's USB power - IF that power can put out at least 1A. 500mA will likely cause instability. It depends on if you want to try to get the whole thing down to one power cable.
  • Cheap SD Card - 8 gigs is fine, but get whatever works for you. This doesn't need to be awesome.
  • A 1 foot HDMI cable. You're gonna mount the Raspberry Pi to the back of the monitor and hide it so you want the cable to be as small as possible.
  • And finally - a 24" ish (smaller is fine) LCD (IPS is nice) monitor with smallish bezels and HDMI inputs that go out to the side (NOT directly out the back) as you want this flush on the wall.
    • Think about how you'll mount it. You can take the back off the monitor and use hanging wire OR use a flush VESA mount.

Install Raspbian on the Raspberry Pi. I use NOOBS to bootstrap my install as it's super fast and easy. Go through the standard setup. Make sure you've set up:

  • Wifi login
  • Timezone
  • Boot to Desktop automatically
  • Install Chromium via "sudo apt-get install -y rpi-chromium-mods"

Then you make sure that Chromium starts up full screen, the mouse is hidden, and we're looking at the dashboard! It's super important you don't have to touch it. It's an appliance, right?

sudo nano ~/.config/lxsession/LXDE-pi/autostart
@xset s off
@xset -dpms
@xset s noblank
@chromium-browser --noerrdialogs --incognito --kiosk http://dakboard.com/app/?p=YOUR_PRIVATE_URL

Then you can set up a cronjob if you want to turn the Pi's screen on and off on a schedule. Using rpi-hdmi.sh you can make a crontab -e that looks like this:

# Turn HDMI Off (22:00/10:00pm)
0 22 * * * /home/pi/rpi-hdmi.sh off
# Turn HDMI On (7:00/7:00am)
0 7 * * * /home/pi/rpi-hdmi.sh on 
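
I won't reproduce rpi-hdmi.sh here, but a sketch of what a script like it does - and this is my assumption about its internals, not the actual script - is to toggle the HDMI output with tvservice and nudge the framebuffer on wake:

#!/bin/sh
# Hypothetical sketch of an rpi-hdmi.sh-style script
case "$1" in
  off) tvservice -o ;;                       # power down the HDMI output
  on)  tvservice -p                          # power it back up (preferred mode)
       fbset -depth 8 && fbset -depth 16 ;;  # force the framebuffer to redraw
esac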

My family uses Google Calendar (GSuite) to manage hanselman.com, but I use Outlook at work. I also have a lot of business/work crap in my calendar that the family doesn't need to see. So I have two problems here, filtering, and appointment movement between Work and Home.

My wife and kids use Google Calendar and it's their authoritative source. My work calendar is MY authoritative source, so I want to sync Outlook->Google but ONLY including Personal/Podcasts/Travel categories. I categorize in Outlook at work, and then those appointments that are appropriate for the family calendar get moved over. Then the Family Calendar dashboard includes color coordinated items for Mom, Dad, Kid1, Kid2. The kids include homework that's due as appointments.

I use the Outlook Google Calendar Sync open source project to do this calendar movement for me. It does require Outlook and is a client solution so if you have a better idea let me know.

GOTCHA: I have been using Google Calendar for YEARS. I have also been using sync tools like this for years. As such, I was noticing that sometimes DakBoard would timeout asking for my Google Calendar's ICS file. It would take minutes. So I requested it myself and it was 26 megs. It's clear that Google calendar doesn't care deeply about iCal and that's disappointing. This could easily be solved if they'd support some kind of OData like URL-based query for fromdate=, todate=. In this case, the DakBoard was getting 26 megs over and over to just show a few weeks of appointments. I literally had appointments from 2005 in the calendar. I decided that since I'd declared Outlook my authoritative source for my calendar that I'd take an archive (one time snapshot) of my iCal and then delete all my calendar items from Google Calendar and re-sync, one way, from the authoritative source, going back 1 year. I'm likely a rare case but it's worth noting in case you bump into this.

All in all, this can easily be done in a short few hours if you have a Pi and a monitor. The time will be spent making it "sanitary." Making the cables perfect, hanging it on the wall, hiding the cables, then tweaking the screen to be perfect.

Editing screens on DakBoard

DakBoard has a free option that works great, or a Premium subscription that gives you even more control. Again, it depends on your web/art ability, and your patience. This is a fun new world that I'm excited to get involved with and my family is already stoked about this new display as we enter the holiday season.


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.




Upgrading the DakBoard Family Calendar with Raspberry Pi Zero W and Read Only filesystem


Raspberry Pi Zeros are SMALL

Earlier this week I built a Family Calendar using a used flat screen monitor and a Raspberry Pi 3 I had lying around and documented it in my post How to build a Wall Mounted Family Calendar and Dashboard with a Raspberry Pi and cheap monitor.

Eric Brown added two great comments (the comments on my blog are always better than the content!) He said:

  • You can save power & money by using a Pi Zero W instead.
  • This is likely overkill, but I took the time to get the Pi Zero to mount the SD card read-only and do all the writes to a RAM disk.

Eric said "RPis are surprisingly sensitive to power glitches, and will often corrupt the SD card" and that "after mounting the SD read-only, my DakBoard has been running stably for months; before doing that, it corrupted the SD card within 6 weeks."

While I haven't had any issues with my Raspberry Pis, this seemed like a fun "version 2" of the calendar to make with the kids. Worst case scenario? Now I have LCD family calendars!

You'll recall I commented about how important the Spouse Acceptance Factor is whenever introducing new technology into the house.

It has to just work. If my Spouse doesn't like the idea or find its not reliable, the SAF (Spouse Acceptance Factor) will be low and they'll want to get rid of it. All it takes is one "why isn't this working" and I'm dead in the water.

I checked Amazon and found a number of Raspberry Pi Zero W (W is for Wireless, important!) Kits for around US$20. You can see in the picture above how SMALL a Raspberry Pi Zero W is (with LEGO Miss Marvel for scale).

Get the HDMI cables as flush and sanitary as possible

If you have the cables, power supplies, and don't need the headers and extra stuff, I've seen them as low as $10. It's very important to note that a Raspberry Pi Zero W does support HDMI but it has a MINI-HDMI female connector. You'll need a mini-HDMI to HDMI adapter or a mini-HDMI to HDMI short cable.

Here's another aside. Did you know there are a LOT of different HDMI connector orientations? Sure, you could just loop a big old 6 foot HDMI cable back there, but where's the fun in that? There are micro HDMI D1,D2,D3 that describe 90 degree and 270 degree rotations of the male. If you want to be really flush, consider a cable (for example like a C2 to A2) that is usually used in drones. This would allow you to mount the Pi Zero W flush against the back of the monitor - or even better, inside the monitor or a wooden picture frame!

Dakboard

Get the Raspberry Pi Zero W on your wireless and avoid the trouble of keyboards and mice!

Pi Zero Ws are so small that they don't have a regular USB connector. There is one for power and one that is "USB OTG." If you want to connect a mouse and keyboard directly to the Zero you'll need this USB OTG Micro to Type A Cable and/or a powered USB hub.

OR!

Save money and prep your Raspberry Pi Micro SD Card with SSH turned on by default and your Wireless Network enabled by default! Then you can set it up remotely as a DakBoard/MagicMirror Family Calendar.

  • Download the Image for Raspbian Stretch. You'll want the desktop version (not Lite) because this IS a visual project, not a headless one!
  • I recommend Etcher for burning images to SD Cards. It's free.
  • Raspberry Pi Zero W and a 1A+ micro USB power supply
  • Cheap micro SD Card. They should include an adapter to plug it into your main computer to prepare.
    • Create an empty file called "ssh" on the prepared Micro SD Card before you put the card in the Raspberry Pi
    • Make a file called wpa_supplicant.conf with Linux line feeds (LF, not the default Windows CR/LF) with content like this (and your own country code)
country=US
update_config=1
ctrl_interface=/var/run/wpa_supplicant

network={
    scan_ssid=1
    ssid="YourNetworkSSID"
    psk="NETWORKPASSWORD!"
}

This will cause the Pi to get on the network on boot up which should allow you to SSH over to it directly, thereby avoiding any trouble with keyboards and mice and the Pi Zero W.

If you DO end up wanting to connect a keyboard and mouse, you'll want a keyboard/mouse setup that is all-in-one with just one USB adapter, or you'll need a powered USB hub. This should be temporary as you get the Pi prepared.

Make the Raspberry Pi Zero W readonly - after it's been configured with DakBoard

Once I had the Pi Zero W all prepared I went around the net looking for tutorials to make it readonly. You're basically causing Linux to mount the SD Card readonly and then do all writes to a RAM Disk that will ultimately be tossed whenever you (rarely) reboot. Get it perfect before you go readonly as it's a small hassle to switch back. Or you can pull the card out and mount it on your other computer then return it. Still, not awesome.

Eric from the comments pointed me to a Raspbian Jessie tutorial, but I tried it and it didn't work for me, likely because I'm on Raspbian Stretch, a newer version. There are a LOT of choices and ways to do this, but the best tutorial I found was on the page for Domoticz, an open source Home Automation system which looks, as an aside, awesome and like something I need to check out in the future!

For now, I followed these instructions on Setting up overlayFS on Raspberry PI (the "overlay" being the file system you write to - it's a fake; the writes actually go to one folder, and the two folders (one read-write and one read-only) are overlaid over each other). This allowed me to make a Raspberry Pi Raspbian Stretch system read-only on my Pi Zero W.

I followed the instructions exactly, only skipping the parts like "Modify domoticz service" that didn't apply. When I run "mount" I can see the main file system is read-only and the others are overlaid and read-write.

pi@dakboard2:~ $ mount
/dev/mmcblk0p7 on / type ext4 (ro,noatime,data=ordered)
snip!
ramdisk on /var_rw type tmpfs (rw,relatime)
ramdisk on /home_rw type tmpfs (rw,relatime)
overlay on /home type overlay (rw,relatime,lowerdir=/home_org,upperdir=/home_rw/upper,workdir=/home_rw/work)
overlay on /var type overlay (rw,relatime,lowerdir=/var_org,upperdir=/var_rw/upper,workdir=/var_rw/work)

So far so good! This will make a smaller and lower-power Family Calendar that will hopefully be more reliable as well! Thanks, Eric from the comments!


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

The 2018 Christmas List of Best STEM Toys for Kids

Hey friends! This is my FIFTH year doing a list of Great STEM Christmas Toys for Kids! Can you believe it? In case you missed them, here's the previous years' lists! Be aware I use Amazon referral links so I get a little kickback (and you support this blog!) when you use these links. I'll be using the pocket money to...wait for it...buy STEM toys for kids! So thanks in advance!

OK, let's do it!

littleBits

I've always liked littleBits but when they first came out I thought they were expensive and didn't include enough stuff. Fast forward and littleBits have dropped in price and built a whole ecosystem of littleBits that work together. This year the most fun is the littleBits Marvel Avengers Inventor Kit. At the time of this writing, this kit is 33% off at Amazon. You can build your own Iron Man (or Ironheart!) gauntlet and load it up with littleBits that can do whatever you'd like. One particularly cool thing included is an LED Matrix that you can address directly by writing code with the iOS or Android app.

littleBits Marvel Avengers Inventor Kit

Kano - Computer Kit and Wand

Both my kids love the Kano Computer Kit, now updated for 2018. It's a complete Raspberry Pi 3 kit that includes the keyboard, mouse, case, LED lights, and everything you'd need to build a Pi. This year they've branched out to the Kano Harry Potter Coding Kit that you can use to build a wand and learn to code. The "wand" is a custom PCB with codeable LEDs, buttons, and batteries that the kids put inside a wand. The wand is Bluetooth and includes lots of tech like an accelerometer, gyroscope, magnetometer, and a vibrating rumble pack. All of this tech is controllable with laptops or smart devices and codeable with JavaScript.

Harry Potter Kano Coding Kit and Wand

UbTech JIMU Robot - Unicornbot Kit

UbTech has a whole series of Technic-style robot kits. There's the usual tanks and cars, but there's also some more creative and "out there" ones like this 400-piece Unicorn Robot. It includes color sensors, servo motors, a DC motor, and a light-up horn. It's also codeable/controllable via an iOS or Android app. Very cool!

I'd really like their Lynx Alexa controllable walking robot but it's way out of my price range. Still fun to check out though!

Unicornbot

Erector by Meccano Kits

We've found these Erector by Meccano Kits to be inexpensive and well-built. The 25-in-1 kit is great and includes a container and over 600 pieces. I like these metal kits because they feel like the ones I had in my childhood. Kids learn how to use motors and pulleys and explore functional motion.

Erector Set

Osmo Genius Kit for iPad

The Osmo Genius is quite clever and based on one deceptively simple idea - what if the iPad camera faced downward and could see the table in front of the child? It comes with a base and a reflector that directs the front-facing camera downwards. Then the educational games are written to see what's happening on the table and provide near-instant feedback. You can start with the base kit and later optionally add kits and games.

Osmo Genius Kit for iPad

Elenco 130-in-1 Electronic Playground and Learning Center

I like classic toys and while toys with bluetooth and fancy features are cool, I want to balance it out with the classics that let you explore the physical world. These also tend to be more affordable as well.

I really like this classic electronic trainer with 130 experiments like an AM broadcast station, Electronic Organ, LED strobe light, Timer, Logic Circuits and much, much more. The 50-in-One version is just $16! Frankly all the Elenco products are fantastic.

Elenco 130-in-1 Electronic Playground

Piper Computer Kit (2018 Edition)

I had this on the list last year but my kids still love it. We have the 2016 kit and it's been updated for 2018.

The Piper is a little spendy at first glance, but it's EXTREMELY complete and very thoughtfully created. Sure, you can just get a Raspberry Pi and hack on it - but the Piper is not just a Pi. It's a complete kit where your little one builds their own wooden "laptop" box (more of a luggable), and then starting with just a single button, builds up the computer. The Minecraft content isn't just vanilla Microsoft. It's custom episodic content! Custom voice overs, episodes, and challenges.

What's genius about Piper, though, is how the software world interacts with the hardware. For example, at one point you're looking for treasure on a Minecraft beach. The Piper suggests you need a treasure detector, so you learn about wiring and LEDs and wire up a treasure detector LED while it's running. Then you run your Minecraft person around while the LED blinks faster to detect treasure. It's absolute genius. Definitely a favorite in our house for the 8-12 year old set.

Piper Raspberry Pi Kit

I hope you have a great holiday season!

FYI: These Amazon links are referral links. When you use them I get a tiny percentage. It adds up to taco money for me and the kids! I appreciate you - and you appreciate me - when you use these links to buy stuff.


Sponsor: Let top companies apply to you. Create a free profile on Hired and unlock the ability to let companies apply to you, not the other way around. Create a free profile.



© 2018 Scott Hanselman. All rights reserved.
     

On Developer Advocacy

TeamworkNaming things is hard. I've talked before about the term "evangelism" and my dislike for it. Evangelism, Advocacy, Developer Relations, PR, Marketing, and on and on. More and more I'm just trying to educate and maybe entertain a little. So I like Edutainment, myself, hat tip to KRS-One.

I'm getting on a plane tomorrow to go to the Microsoft Azure + AI Conference @DevIntersection and the Free Microsoft Connect 2018 Event (you can watch online all day!) and as I was packing I was struck with a few thoughts I wanted to share here.

What a privilege it is to speak about products that so many people have worked on and (hopefully) so many people will enjoy. Especially ones as large as Azure or Visual Studio - thousands of people work so hard! Engineers, Program Managers, Testers, Community Members...people from all over working on each release so a select few of us get on stage to share it with you! And who am I to have this privilege?

Don't think for a second that when you're giving a technical talk that it's about you. You're sitting on a stack of software you had a small part in writing and standing on the shoulders of giants of generations of engineers and creators who came before you. When I do talks where I'm representing a huge group I reflect on this with gratitude.

If you work on any of the products I'm showing, know this: I may be one of the talking heads or a visible grand marshal but we work for you and we never forget it. My job at events like this is to make the product - your work - shine. I take that job very seriously, and if it looks like it's effortless, that's because of the massive amount of work we put into the presentation. Hours of practice, story arcs, literally blocking movement as if it were a play or stage show, camera work, and transitions. Deeply understanding what we're presenting, why it's awesome, and why you're proud of it.

I'm writing this note for all the other advocates and visible community members.

What a joy and privilege it is to stand up and represent our co-workers and fellow engineers and to tell the stories of the things they build!

Let that privilege both put motivation in you and propel you forward to present their work - your teams' work.

I appreciate you all, both inside and out, and I will do my best to represent your team and the larger community to the best of my ability.


Sponsor: Looking for a new challenge? Hired is the leading job marketplace that connects engineers to their next challenge. Let Hired connect you to your next challenge. Sign up now.



© 2018 Scott Hanselman. All rights reserved.
     

Announcing WPF, WinForms, and WinUI are going Open Source

Buckle up friends! Microsoft is open sourcing WPF, Windows Forms (winforms), and WinUI, so the three major Windows UX technologies are going open source! All this is happening on the same day as .NET Core 3.0 Preview 1 is announced. Madness! ;)

.NET Core 3 is a major update which adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). Note that .NET Core 3 continues to be open source and runs on Windows, Linux, Mac, in containers, and in the cloud. In the case of WPF/WinForms/etc you'll be able to create apps for Windows that include (if you like) their own copy of .NET Core for a clean side-by-side install and even faster apps at run time. The Windows UI XAML Library (WinUI) is also being open sourced AND you can use these controls in any Windows UI framework.

That means your (or my!) WPF/WinForms/WinUI apps can all use the same controls if you like, using XAML Islands. I could take the now 10 year old BabySmash WPF app and add support for pens, improved touch, or whatever makes me happy!

The WPF and Windows Forms projects run under the .NET Foundation, which also announced changes today: the community will guide foundation operations. The .NET Foundation is also changing its governance model by increasing the number of board members to 7, with just 1 appointed by Microsoft. The other board members will be voted on by the community! Anyone who has contributed to a .NET Foundation project can run, similar to how the Gnome Foundation works. Learn more about the .NET Foundation here.

On the runtime and versioning side, here's a really important point from the .NET blog that's worth emphasizing IMHO:

Know that if you have existing .NET Framework apps that there is no pressure to port them to .NET Core. We will be adding features to .NET Framework 4.8 to support new desktop scenarios. While we do recommend that new desktop apps should consider targeting .NET Core, the .NET Framework will keep the high compatibility bar and will provide support for your apps for a very long time to come.

I think of it this way. If you've got an existing app that you're happy with, there is no reason to port it to .NET Core. Microsoft will support the .NET Framework for a very long time, given that it's a part of Windows. But post-.NET Framework 4.8, new features will usually only become available in .NET Core because Microsoft is drastically reducing the risk, and thus the rate of change, for .NET Framework. So if you're building a new app or you're actively evolving an existing app, you should really start looking at .NET Core. Porting to .NET Core certainly isn't free, but it offers many benefits, such as better performance, XCOPY deployment for the framework itself, and a feature set that is growing fast, thanks to open source. Choose the strategy that makes sense for your project and/or business.

I don't want to hear any of this "this is dead, only use that" nonsense. We just open sourced WinForms and have already taken Pull Requests. WinForms has been updated for 4k+ displays! WPF is open source, y'all! Think about the .NET Standard and how you can run standard libraries on .NET Framework, .NET Core, and Mono - or any ".NET" that's out there. Mono is enabling running .NET Standard libraries via WebAssembly. To be clear - your browser is now .NET Standard capable! There are open source projects like https://platform.uno/ and Avalonia and Ooui taking .NET in new and interesting places. Blazor makes web UIs in .NET, with (preview/experimental) client-side support via WebAssembly and server-side support included in .NET Core 3.0 as Razor Components. Only good things are coming, my friends!

.NET ALL THE THINGS

.NET Core runs on Raspberry Pi and ARM processors! .NET Core supports serial ports, IoT devices, and there's even a System.Device.GPIO (General Purpose I/O) package! Go explore https://github.com/dotnet/iot to really get your head around how much cool stuff is happening in the .NET space.

I want to encourage you to go check out Matt Warren's extremely well-researched post "Open Source .NET - 4 years later" to get a real visceral sense of how far we've come as a community. You'll be amazed!

Now, go play!

Enjoy.


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.



© 2018 Scott Hanselman. All rights reserved.
     

How to remove words from the Windows Autocorrect Spell Check Dictionary

Well crap. I was typing really fast and got a squiggly, so I right-clicked on it and rather than selecting the correct word from the autocorrect dictionary, I clicked Add To Dictionary.

I added the MISSPELLED WORD to the Dictionary! Now Windows is suggesting that I spell this word (and others) wrong in all apps.

At this point I also realized that I had no idea how to REMOVE a word from the Windows Spell Check Dictionary. However, I do know that Windows isn't a black box so there must be a dictionary somewhere. It's gotta be a file or a registry key or something, right?

It's even easier than I thought it would be. The Windows 10 custom dictionaries are at %AppData%\Microsoft\Spelling\

I just opened the default.dic file in Notepad and removed the misspelled word.

Opening default.dic in Notepad
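
If you'd rather script the cleanup than hand-edit in Notepad, here's a minimal C# sketch. To be clear about my assumptions: the "neutral" language subfolder and the misspelled word "teh" are placeholders for your own, and default.dic is typically UTF-16, hence the explicit encoding.

using System;
using System.IO;
using System.Linq;
using System.Text;

class DictionaryCleanup
{
    static void Main()
    {
        // Assumption: your custom words landed in the "neutral" subfolder - check yours.
        var dic = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
            @"Microsoft\Spelling\neutral\default.dic");

        // "teh" is a hypothetical misspelling - swap in the word you added by mistake.
        var words = File.ReadAllLines(dic).Where(w => w != "teh").ToArray();

        // Write it back as UTF-16 (Unicode) so the dictionary stays readable by Windows.
        File.WriteAllLines(dic, words, Encoding.Unicode);
    }
}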

Whew. I can't tell you how many wrong words have found their way in there over the years. Hope this helps you in some small way.


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.


© 2018 Scott Hanselman. All rights reserved.
     

How to set up ASP.NET Core 2.2 Health Checks with BeatPulse's AspNetCore.Diagnostics.HealthChecks

Availability TestsASP.NET Core 2.2 is out and released and upgrading my podcast site was very easy. Once I had it updated I wanted to take advantage of some of the new features.

For example, I have used a number of "health check" services like elmah.io, pingdom.com, or Azure's Availability Tests. I have tests that ping my website from all over the world and alert me if the site is down or unavailable.

I've wanted to make my Health Endpoint Monitoring more formal. You likely have a service that does an occasional GET request to a page and looks at the HTML, or maybe just looks for an HTTP 200 Response. For the longest time most site availability tests were just basic pings. Recently folks have been formalizing their health checks.

You can make these tests more robust by actually having the health check endpoint check deeper and then return something meaningful. That could be as simple as "Healthy" or "Unhealthy" or it could be a whole JSON payload that tells you what's working and what's not. It's up to you!

Is your database up? Maybe it's up but in read-only mode? Are your dependent services up? If one is down, can you recover? For example, I use some 3rd party back-end services that might be down. If one is down I could use cached data, but my site is less than "Healthy," and I'd like to know. Is my disk full? Is my CPU hot? You get the idea.
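
To give you a flavor of what "deeper" might look like, here's a minimal sketch of a custom check using the built-in IHealthCheck interface. The 1 GB free-space threshold and the "disk" name are just assumptions for illustration:

using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

public class DiskSpaceHealthCheck : IHealthCheck
{
    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        // Hypothetical threshold: report Unhealthy below 1 GB free on C:
        var freeBytes = new DriveInfo("C").AvailableFreeSpace;
        return Task.FromResult(freeBytes > 1_000_000_000
            ? HealthCheckResult.Healthy("Plenty of disk space.")
            : HealthCheckResult.Unhealthy("Disk is nearly full."));
    }
}

// Registered alongside the defaults in ConfigureServices:
// services.AddHealthChecks().AddCheck<DiskSpaceHealthCheck>("disk");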

You also need to distinguish between a "liveness" test and a "readiness" test. Liveness failures mean the site is down, dead, and needs fixing. Readiness tests mean it's there but perhaps isn't ready to serve traffic. Waking up, or busy, for example.

If you just want your app to report its liveness, just use the most basic ASP.NET Core 2.2 health check in your Startup.cs. It'll take you minutes to set up.

// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks(); // Registers health check services
}

public void Configure(IApplicationBuilder app)
{
    app.UseHealthChecks("/healthcheck");
}

Now you can add a content check in your Azure or Pingdom tests, or tell Docker or Kubernetes if you're alive or not. Docker has a HEALTHCHECK directive, for example:

# Dockerfile
...
HEALTHCHECK CMD curl --fail http://localhost:5000/healthcheck || exit 1

If you're using Kubernetes you could hook up the Healthcheck to a K8s "readinessProbe" to help it make decisions about your app at scale.

Now, since determining "health" is up to you, you can go as deep as you'd like! The BeatPulse open source project has integrated with the ASP.NET Core Health Check API and set up a repository at https://github.com/Xabaril/AspNetCore.Diagnostics.HealthChecks that you should absolutely check out!

Using these add on methods you can check the health of everything - SQL Server, PostgreSQL, Redis, ElasticSearch, any URI, and on and on. Just add the package you need and then add the extension you want.

You don't usually want your health checks to be heavy but as I said, you could take the results of the "HealthReport" list and dump it out as JSON. If this is too much code going on (anonymous types, all on one line, etc) then just break it up. Hat tip to Dejan.

app.UseHealthChecks("/hc",

new HealthCheckOptions {
ResponseWriter = async (context, report) =>
{
var result = JsonConvert.SerializeObject(
new {
status = report.Status.ToString(),
errors = report.Entries.Select(e => new { key = e.Key, value = Enum.GetName(typeof(HealthStatus), e.Value.Status) })
});
context.Response.ContentType = MediaTypeNames.Application.Json;
await context.Response.WriteAsync(result);
}
});

At this point my endpoint doesn't just say "Healthy," it looks like this nice JSON response.

{
    "status": "Healthy",
    "errors": []
}

I could add a Url check for my back-end API. If it's down (or in this case, unauthorized) I'll get a nice explanation. I can decide if this means my site is unhealthy or degraded. I'm also pushing the results into Application Insights, which I can then query on and make charts against.

services.AddHealthChecks()
    .AddApplicationInsightsPublisher()
    .AddUrlGroup(new Uri("https://api.simplecast.com/v1/podcasts.json"), "Simplecast API", HealthStatus.Degraded)
    .AddUrlGroup(new Uri("https://rss.simplecast.com/podcasts/4669/rss"), "Simplecast RSS", HealthStatus.Degraded);

Here is the response, cool, eh?

{
    "status": "Degraded",
    "errors": [
        { "key": "Simplecast API", "value": "Degraded" },
        { "key": "Simplecast RSS", "value": "Healthy" }
    ]
}

This JSON is custom, but perhaps I could use a built-in writer for a reasonable default and then hook up a free default UI?

app.UseHealthChecks("/hc", new HealthCheckOptions()

{
Predicate = _ => true,
ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});

app.UseHealthChecksUI(setup => { setup.ApiPath = "/hc"; setup.UiPath = "/healthcheckui";);

Then I can hit /healthcheckui and it'll call the API endpoint and I get a nice little bootstrappy client-side front end for my health check. A mini dashboard if you will. I'll be using Application Insights and the API endpoint but it's nice to know this is also an option!

If I had a database I could check one or more of those for health as well. The possibilities are endless and up to you.

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks()
        .AddSqlServer(
            connectionString: Configuration["Data:ConnectionStrings:Sql"],
            healthQuery: "SELECT 1;",
            name: "sql",
            failureStatus: HealthStatus.Degraded,
            tags: new string[] { "db", "sql", "sqlserver" });
}

It's super flexible. You can even set up ASP.NET Core Health Checks to have a webhook that sends a Slack or Teams message that lets the team know the health of the site.
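
One way to roll that yourself is with the built-in IHealthCheckPublisher interface. Here's a rough sketch that posts to a chat webhook - the URL is a placeholder, and the one-line payload shape matches Slack's incoming webhooks (Teams wants a slightly different body):

using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using Newtonsoft.Json;

public class WebhookHealthPublisher : IHealthCheckPublisher
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task PublishAsync(HealthReport report, CancellationToken cancellationToken)
    {
        if (report.Status == HealthStatus.Healthy) return; // only ping the team on trouble

        var payload = JsonConvert.SerializeObject(new { text = $"Site health is {report.Status}!" });
        await Client.PostAsync(
            "https://hooks.slack.com/services/YOUR/WEBHOOK/URL", // placeholder - use your own
            new StringContent(payload, Encoding.UTF8, "application/json"),
            cancellationToken);
    }
}

// Registered in ConfigureServices:
// services.AddSingleton<IHealthCheckPublisher, WebhookHealthPublisher>();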

Check it out. It'll take less than an hour or so to set up the basics of ASP.NET Core 2.2 Health Checks.


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.



© 2018 Scott Hanselman. All rights reserved.
     

Useful ASP.NET Core 2.2 Features

Earlier this week I talked about how I upgraded my podcast site to ASP.NET Core 2.2 and added Health Check features fairly easily. There's a ton of new features and so far it's been great running on my site with no issues. Upgrading from 2.1 is straightforward.

I wanted to look at just a few of these that I found particularly interesting.

You can get a very significant performance boost by moving ASP.NET Core in process with IIS.

Using in-process hosting, an ASP.NET Core app runs in the same process as its IIS worker process. This removes the performance penalty of proxying requests over the loopback adapter when using the out-of-process hosting model.

After the IIS HTTP Server processes the request, the request is pushed into the ASP.NET Core middleware pipeline. The middleware pipeline handles the request and passes it on as an HttpContext instance to the app's logic. The app's response is passed back to IIS, which pushes it back out to the client that initiated the request.
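
If you want to try it on your own app, the switch is a single AspNetCoreHostingModel property set to InProcess in your csproj - the new ASP.NET Core 2.2 templates set it for you, and out-of-process remains the default if you don't opt in.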

HTTP Client performance improvements are quite significant as well.

Some significant performance improvements have been made to SocketsHttpHandler by improving the connection pool locking contention. For applications making many outgoing HTTP requests, such as some microservices architectures, throughput should be significantly improved. Our internal benchmarks show that under load HttpClient throughput has improved by 60% on Linux and 20% on Windows. At the same time the 90th percentile latency was cut in half on Linux. See GitHub #32568 for the actual code change that made this improvement.
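
If you want to poke at that connection pool yourself, SocketsHttpHandler (the default handler underneath HttpClient since .NET Core 2.1) exposes a few knobs. A quick sketch - the values here are arbitrary examples, not recommendations:

// SocketsHttpHandler is what HttpClient uses by default in .NET Core 2.1+,
// but you can also new one up directly to tune the connection pool.
var handler = new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(2), // recycle pooled connections periodically
    MaxConnectionsPerServer = 50                        // cap concurrent connections per endpoint
};
var client = new HttpClient(handler);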

HTTP/2 is enabled by default. HTTP/2 may be sneaking up on you, as for the most part "it just works." In ASP.NET Core's Kestrel web server, HTTP/2 is enabled by default over HTTPS. You can see here, at both the command line and in Chrome, that I'm using HTTP/2 locally.

HTTP/2 locally

Here's Chrome. Note the "h2."

HTTP/2 in Chrome

Note that you'll only be able to get HTTP/2 when ALPN (Application-Layer Protocol Negotiation) is available, which depends on support from your operating system's TLS stack.
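
You can also be explicit about protocols if you like. Here's a sketch of a Program.cs that enables HTTP/1.1 and HTTP/2 on a specific HTTPS endpoint - the port is an arbitrary example, and CreateDefaultBuilder already gives you sensible defaults without any of this:

// HttpProtocols lives in Microsoft.AspNetCore.Server.Kestrel.Core
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            options.ListenAnyIP(5001, listenOptions =>
            {
                listenOptions.UseHttps(); // ALPN negotiation happens in the TLS handshake
                listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
            });
        })
        .UseStartup<Startup>();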

All in all, it's a solid release. Go check out the announcement post on ASP.NET Core 2.2 for even more detail!


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.



© 2018 Scott Hanselman. All rights reserved.
     

Enjoy some DOS Games this Christmas with DOSBox

I blogged about DOSBox five years ago! Apparently I get nostalgic around this time of year when I've got some downtime. Here's what I had to say:

I was over at my parents' house for the Christmas Holiday and my mom pulled out a bunch of old discs and software from 20+ years ago. One game was "Star Trek: Judgment Rites" from 1995. I had the CD-ROM Collector's edition with all the audio from the original actors, not just the floppy version with subtitles. It's a MASSIVE 23 megabytes of content!

DOSBox has been providing joy in its reliable service for over 16 years, and you should go check it out RIGHT NOW, if only to remind yourself of how good we have it now. DOSBox is an x86 and DOS emulator - not a virtual machine. It emulates classic hardware like Sound Blaster cards and older graphics standards like VGA/VESA.

If a game runs too fast, you can slow it down by pressing Ctrl-F11. You can speed up games by pressing Ctrl-F12. DOSBox’s CPU speed is displayed in its title bar. Type "intro special" for a full hotkey list.

Note that DOSBox will start up TINY if you have a 4k monitor. There are a few things you can do about it. First, ALT-ENTER will toggle DOSBox into full screen mode, although when you return to Windows your windows may find themselves resized.

For Windowed mode, I used these settings. You can't scale the window when output=surface, so experiment with settings like these:

windowresolution=1280x1024
output=ddraw

These are only the most basic initial changes you'll want to make. There's an enthusiastic community of DOSBox users that are dedicated to making it as perfect as possible. I enjoy this reddit thread debating "pixel perfect" settings. There's also a number of forks and custom builds of DOSBox out there that impose specific settings so be sure to explore and pick the one that makes you happy. It's also important to understand that aspect ratios and the size and squareness of a pixel will all change how your game looks.

I tend to agree with them that I don't want a blurry scaler. I want the dots/pixels as they are, simply made larger (2x, 3x, 4x, etc) with crisp edges at a reasonable aspect ratio. An interesting change you can make to your .conf file is the "forced" keyword after your scaler choice.

Here is scaler=normal3x (no forced)

Blurry DOSBox

and here's scaler=normal3x forced

The instructions say that forced means "the scaler will be used even if the result might not be desired." In this case, it forces the use of the scaler in text mode. Your mileage may vary, but the point is there's options and it's great fun. You may want scanlines or you may want crisp pixels.

I've found it all depends on what your memory of DOS is; what you're trying to do is change the settings to best visualize that memory. My (broken) memory is of CRISP pixels.

Crisp DOSBox

Amazing difference!

The first thing you should do is add lines like these to the bottom of your dosbox.conf. You'll want your virtual C: drive mounted every time DOSBox starts up!

[autoexec]
# Lines in this section will be run at startup.
MOUNT C: C:\Users\scott\Dropbox\DosBox

If you want to play classic games but don't want the hassle (or questionable legality) of other ways, I'd encourage you to spend some serious time at https://www.gog.com. They've packaged up a ton of classic games so they "just work."

Bard's Tale 3
Space Quest 3

Enjoy! And THANK YOU to the folks that work on DOSBox for their hard work. It shows and we appreciate it.


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.



© 2018 Scott Hanselman. All rights reserved.
     

The Fun of Finishing - Exploring old games with Xbox Backwards Compatibility

Star Wars: KOTORI'm on vacation for the holidays and I'm finally getting some time to play video games. I've got an Xbox One X that is my primary machine, and I also have a Nintendo Switch that is a constant source of joy. I recently also picked up a very used original PS4 just to play Spider-man but expanded to a few other games as well.

One of the reasons I end up using my Xbox more than any of my other consoles is its support for Backwards Compatibility. Backwards Compat is so extraordinary that I did an entire episode of my podcast on the topic with one of the creators.

The general idea is that an Xbox should be able to play Xbox games. Let's take that even further - today's Xbox should be able to play today's Xbox games AND yesterday's...all the way back to the beginning. One more step further, shall we? Today's Xbox should be able to play all Xbox games from every console generation and they'll look better than you imagined them!

The Xbox One X can take 720p games and upscale them to 4k, use higher quality textures, and some games like Final Fantasy XIII have even been fully remastered but you still use the original disc! I would challenge you to play the original Red Dead Redemption on an Xbox One X and not think it was a current generation game. I recently popped in a copy of Splinter Cell: Conviction and it automatically loaded a 5-year-old save game from the cloud and I was on my way. I played Star Wars: KOTOR - an original Xbox game - and it looks amazing.

Red Dead Redemption

A little vacation combined with a lot of backwards compatibility has me actually FINISHING games again. I've picked up a ton of games this week and finally had that joy of finishing them. Each game I started up that had a save game found me picking up 60% to 80% into the game. Maybe I got stuck, perhaps I didn't have enough time. Who knows? But I finished. Most of these finishings were just 3 to 5 hours of pushing from my current (old, original) save games.

  • Crysis 2 - An Xbox 360 game that now works on an Xbox One X. I was halfway through and finished it up in a few days.
  • Crysis 3 - Of course I had to go to the local retro game trader and pick up a copy for $5 and bang through it. Crysis is a great trilogy.
  • Dishonored - I found a copy in my garage while cleaning. Turns out I had a save game in the Xbox cloud since 2013. I started right from where I left off. It's so funny to see a December 2018 save game next to a 2013 save game.
  • Alan Wake - Kind of a Twin Peaks type story, or a Stephen King with a flashlight and a gun. Gorgeous game, and very innovative for the time.
  • Mirror's Edge - Deceptively simple graphics that look perfect on 4k. This isn't just upsampling, to be clear. It's magic.
  • Metro 2033 - Deep story and a lot of world building. Oddly I finished Metro: Last Light a few months back but never did the original.
  • Sunset Overdrive - It's so much better than Jet Set Radio Future. This game has a ton of personality and they recorded ALL the lines twice with a male and female voice. I spoke to the voiceover artist for the female character on Twitter and I really think her performance is extraordinary. I had so much fun with this game that now the 11 year old is starting it up. An under-respected classic.
  • Gears of War Ultimate - This is actually the complete Gears series. I was over halfway through all of these but never finished. Gears are those games where you play for a while and end up pausing and googling "how many chapters in gears of war." They are long games. I ended up finishing on the easiest difficulty. I want a story and I want some fun but I'm not interested in punishment.
  • Shadow Complex - Also surprisingly long, I apparently (per my save game) gave up with just an hour to go. I guess I didn't realize how close I was to the end?

I'm having a blast (while the spouse and kids sleep, in some cases) finishing up these games. I realize I'm not actually accomplishing anything, but the psychic weight of the unfinished is being lifted in some cases. I don't play a lot of multiplayer games as I enjoy a story. I read a ton of books and watch a lot of movies, so I look for a tale when I'm playing video games. They are interactive books and movies for me with a complete story arc. I love it when the credits roll. A great single player game with a built-up universe is as satisfying (or more so) as finishing a good book.

What are you playing this holiday season? What have you rediscovered due to Backwards Compatibility?


Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.



© 2018 Scott Hanselman. All rights reserved.
     