Channel: Scott Hanselman's Blog

ASP.NET Single Page Applications Angular Release Candidate


I was doing some Angular then remembered that the ASP.NET "Angular Project Template" has a release candidate and is scheduled to release sometime soon in 2018.

Starting with just a .NET Core 2.0 install plus Node v6 or later, I installed the updated Angular template. Note that this isn't one of the angular/react/redux templates that come with .NET Core's base install.

I'll start by adding the updated SPA (single page application) template:

dotnet new --install Microsoft.DotNet.Web.Spa.ProjectTemplates::2.0.0-rc1-final

Then from a new directory, just

dotnet new angular

Then I can open it in either VS Code or Visual Studio Community (free for Open Source). If you're interested in the internals, open up the .csproj project file and note the checks for ensuring node is installed, running npm, and running WebPack.

If you've got the Angular "ng" command line tool installed you can do the usual ng related stuff, but you don't need to run "ng serve" because ASP.NET Core will run it automatically for you.
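
Under the hood that's the SpaServices middleware at work. The template's Startup.cs wires it up roughly like this (a sketch of my understanding; the RC template's exact code may differ):

app.UseSpa(spa =>
{
    // The Angular app lives in ./ClientApp
    spa.Options.SourcePath = "ClientApp";

    if (env.IsDevelopment())
    {
        // In development, ASP.NET Core launches "ng serve" for you and proxies to it
        spa.UseAngularCliServer(npmScript: "start");
    }
});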

I set development mode with "SET ASPNETCORE_Environment=Development" and then run "dotnet build." It will also restore your npm dependencies as part of the build. The client-side app lives in ./ClientApp.

C:\Users\scott\Desktop\my-new-app> dotnet build

Microsoft (R) Build Engine version 15.5 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Restore completed in 73.16 ms for C:\Users\scott\Desktop\my-new-app\my-new-app.csproj.
Restore completed in 99.72 ms for C:\Users\scott\Desktop\my-new-app\my-new-app.csproj.
my-new-app -> C:\Users\scott\Desktop\my-new-app\bin\Debug\netcoreapp2.0\my-new-app.dll
v8.9.4
Restoring dependencies using 'npm'. This may take several minutes...

"dotnet run" then starts the ng development server and ASP.NET all at once.

My ASP.NET Angular Application

If we look at the "Fetch Data" menu item, we can see an example of how Angular and open source ASP.NET Core work together. Here's the Weather Forecast *client-side* template:

<p *ngIf="!forecasts"><em>Loading...</em></p>

<table class='table' *ngIf="forecasts">
  <thead>
    <tr>
      <th>Date</th>
      <th>Temp. (C)</th>
      <th>Temp. (F)</th>
      <th>Summary</th>
    </tr>
  </thead>
  <tbody>
    <tr *ngFor="let forecast of forecasts">
      <td>{{ forecast.dateFormatted }}</td>
      <td>{{ forecast.temperatureC }}</td>
      <td>{{ forecast.temperatureF }}</td>
      <td>{{ forecast.summary }}</td>
    </tr>
  </tbody>
</table>

And the TypeScript:

import { Component, Inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-fetch-data',
  templateUrl: './fetch-data.component.html'
})
export class FetchDataComponent {
  public forecasts: WeatherForecast[];

  constructor(http: HttpClient, @Inject('BASE_URL') baseUrl: string) {
    http.get<WeatherForecast[]>(baseUrl + 'api/SampleData/WeatherForecasts').subscribe(result => {
      this.forecasts = result;
    }, error => console.error(error));
  }
}

interface WeatherForecast {
  dateFormatted: string;
  temperatureC: number;
  temperatureF: number;
  summary: string;
}

Note the URL. Here's the back end; the request is serviced by ASP.NET Core. Note how the C# WeatherForecast class mirrors the TypeScript interface, as well as the TemperatureF server-side conversion.

[Route("api/[controller]")]
public class SampleDataController : Controller
{
    private static string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    [HttpGet("[action]")]
    public IEnumerable<WeatherForecast> WeatherForecasts()
    {
        var rng = new Random();
        return Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            DateFormatted = DateTime.Now.AddDays(index).ToString("d"),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        });
    }

    public class WeatherForecast
    {
        public string DateFormatted { get; set; }
        public int TemperatureC { get; set; }
        public string Summary { get; set; }

        public int TemperatureF
        {
            get
            {
                return 32 + (int)(TemperatureC / 0.5556);
            }
        }
    }
}

Pretty clean and straightforward. I'm not sure about the DateTime.Now, but for the most part I understand this and can see how to extend it. Check out the docs on this release candidate, and note that it includes updated React and Redux templates as well!


Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!



Exploring the Azure IoT Arduino Cloud DevKit


Someone gave me an Azure IoT DevKit, and it was lovely timing as I'm continuing to learn about IoT. As you may know, I've done a number of Arduino and Raspberry Pi projects and plugged them into various and sundry clouds, including AWS and Azure, as well as higher-level hobbyist systems like Adafruit IO (which is super fun, BTW. Love them.)

The Azure IoT DevKit is brilliant for a number of reasons, but one of the coolest things is that you don't need a physical one...they have an online simulator! Which is very Inception. You can try out the simulator at https://aka.ms/iot-devkit-simulator. You can literally edit your .ino Arduino files in the browser, connect them to your Azure account, and then deploy them to a virtual DevKit (seen on the right). All the code and how-tos are on GitHub as well.

When you hit Deploy it'll create a Free Azure IoT Hub. Be aware that if you already have a free one you may want to delete it (as you can only have a certain number) or change the template as appropriate. When you're done playing, just delete the entire Resource Group and everything within it will go away.

The Azure IoT DevKit in the browser is amazing

Right off the bat you'll have the code to connect to Azure, get tweets from Twitter, and display them on the tiny screen! (Did I mention there's a tiny screen?) You can also "shake" the virtual IoT kit, and exercise the various sensors. It wouldn't be IoT if it didn't have sensors!

It's a tiny Arduino device with a screen!

This is just the simulator, but it's exactly like the real MXChip IoT DevKit. (Get one here.) They are less than US$50 and include WiFi, Humidity & Temperature, Gyroscope & Accelerometer, Air Pressure, Magnetometer, Microphone, and IrDA, which is a ton for a small dev board. It's also got a tiny 128x64 OLED color screen! Finally, the board can also go into AP mode, which lets you easily put it online in minutes.

I love these well-designed, elegant little devices. The DevKit also shows up as an attached disk, and it's easy to upgrade the firmware.

Temp and Humidity on the Azure IoT DevKit

You can then dev against the real device with free VS Code if you like. You'll need:

  • Node.js and Yarn: Runtime for the setup script and automated tasks.
  • Azure CLI 2.0 MSI - Cross-platform command-line experience for managing Azure resources. The MSI contains dependent Python and pip.
  • Visual Studio Code (VS Code): Lightweight code editor for DevKit development.
  • Visual Studio Code extension for Arduino: Extension that enables Arduino development in Visual Studio Code.
  • Arduino IDE: The extension for Arduino relies on this tool.
  • DevKit Board Package: Tool chains, libraries, and projects for the DevKit
  • ST-Link Utility: Essential tools and drivers.

This Zip file sets it all up for you on Windows; head over here for Homebrew/Mac instructions and more details.

I was very impressed with the Arduino extension for VS Code. No disrespect to the Arduino IDE, but you'll likely outgrow it quickly. This free add-on to VS Code gives you IntelliSense and integrated Arduino debugging.

Once you have the basics done, you can graduate to the larger list of projects at https://microsoft.github.io/azure-iot-developer-kit/docs/projects/ that include lots of cool stuff to try out like a cloud based Translator, Door Monitor, and Air Traffic Control Simulator.

All in all, I was super impressed with the polish of it all. There's a LOT to learn, to be clear, but this was a very enjoyable weekend of play.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




Building 0verkill on Windows 10 Subsystem for Linux - 2D ASCII art deathmatch game


I'm a big fan of the Windows Subsystem for Linux. It's real Linux that runs real user-mode ELF binaries, but it's all on Windows 10. It's not running in a Virtual Machine. I talk about it, and some of the things you should be aware of when sharing files between file systems, in this YouTube video.

WHAT IS ALL THIS LINUX ON WINDOWS STUFF? Here's a FAQ on the Bash/Windows Subsystem for Linux/Ubuntu on Windows/Snowball in Hell and some detailed Release Notes. Yes, it's real, and it's spectacular. Can't read that much text? Here's a video I did on Ubuntu on Windows 10.

You can now install Ubuntu from the Windows Store. (Make sure you first run this from an admin Windows PowerShell prompt: "Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux")

I have set up a very shiny Linux environment on Windows 10 with lovely things like tmux and Midnight Commander. The bash/Ubuntu/WSL shell shares the same "console host" (conhost) as PowerShell and CMD.exe, so as the team adds new support for fonts, colors, ANSI, etc., every terminal gets that new feature.

I wanted to see how far this went. How Linuxy is Linux on Windows? How good is the ANSI/ASCII support in the console on Windows 10? Clearly the only real way to check this out would be to try to build 0verkill. 0verkill is a client-server 2D deathmatch-like game in ASCII art. It has both client and server and lots of cool features. Plus building it would exercise the system pretty well. It's also nearly 20 years old which is fun.

PRO TIP: Did you know that you can easily change your command prompt colors globally with the new free open source ColorTool? You can easily switch to solarized or even color-blind schemes for deuteranopia.

There's a fork of the 0verkill code at https://github.com/hackndev/0verkill so I started there. I saw that there was a ./rebuild script that uses aclocal, autoconf, configure, and make, so I needed to apt in some stuff.

sudo apt-get install build-essential autotools-dev automake
sudo apt-get install libx11-dev
sudo apt-get install libxpm-dev

Then I built it with ./rebuild and got a TON of warnings. Looks like this rather old code does some (now, in the modern world) questionable things with fprintf. While I can ignore the warnings, I decided to add -Wno-format-security to the CFLAGS in Makefile.in in order to focus on any larger errors I might run into.

Changing CFLAGS in Makefile.in
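
The change itself amounts to appending the flag to the CFLAGS line in Makefile.in, something like this (the exact line in 0verkill's Makefile.in will differ):

CFLAGS = @CFLAGS@ -Wno-format-security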

I then rebuilt again and got a few warnings, but nothing major. Nice.

Building 0verkill

I run the server locally with ./server. This allows multiple clients to connect; although I'll just be connecting locally, it's nice that the networking works.

$ ./server
11. 1.2018 14:01:42  Running 0verkill server version 0.16
11. 1.2018 14:01:42  Initialization.
11. 1.2018 14:01:42  Loading sprites.
11. 1.2018 14:01:42  Loading level "level1"....
11. 1.2018 14:01:42  Loading level graphics.
11. 1.2018 14:01:42  Loading level map.
11. 1.2018 14:01:42  Loading level objects.
11. 1.2018 14:01:42  Initializing socket.
11. 1.2018 14:01:42  Installing signal handlers.
11. 1.2018 14:01:42  Game started.
11. 1.2018 14:01:42  Sleep

Next, run the client in another bash/Ubuntu console window (or a tmux pane) with ./0verkill.

Awesome. Works great, scales with the window size, ASCII and color looks great.

Alone in 0verkill

Now I just need to find someone to play with me...


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




Running BBS Door Games on Windows 10 with GameSrv, DOSBox, plus telnet fun with WSL


Example of a BBS home screen

I continue to enjoy seeing what can be done with WSL (Windows Subsystem for Linux), but even more fun is combining CMD.exe (the Windows console), Ubuntu on Windows (WSL), and DOSBox (an x86 emulator that lets you run OLD programs in original DOS that no longer run natively on Windows). What kind of cool stuff can I do today?

I did a lightning talk this week at NDC London where I started with a text file that included a CR/LF, Git autocrlf, then talked about typewriters, what a Carriage really is, then the Teletype Model 33, the Altair 8800, the ASCII chart, then ANSI art, and finally moved on to BBS's and BBS Door Games. I'll do a more extensive post later and I'm going to turn this into a full conference talk, but for the demo I ran a few BBS Door Games under Windows 10. Why? Because it's awesome and history is lovely.

You can try setting up what I'm going to describe in this post, or you can try telnet'ing to a BBS like the CaveBBS here: telnet://cavebbs.homeip.net. You might also want to telnet://towel.blinkenlights.nl for ASCII-based Star Wars! Originally we would call (like literally dial up, one to one) a BBS, but ubiquitous internet added telnet as a nice option that persists today. Door Games were ASCII/ANSI games that the BBS would shell out to, passing the connection over. When the game exited, the BBS picked the connection back up. TradeWars is/was the most well-known Door Game and we'd play it for months. TradeWars was the Elite Dangerous of the BBS set. ;)

So the question is, could we play DOS-based 16-bit Door Games today? Yes.

GameSrv can be used to bring your old DOS based BBS server into the new millennium. It'll act as a front-end and accept telnet connections before passing them off to the DOS BBS software.

Rick Parrish has a BBS door game server for Windows and Linux that he's written in open source C# called GameSrv. You may know Rick from his fTelnet browser based app. fTelnet lets you connect to Bulletin Board Systems from the comfort of your browser. A locally-run cross-platform option for connecting to BBS's is SyncTERM.

Go get SyncTERM, Rick's GameSrv Full, as well as DOSBox 0.73. You'll be able to telnet into your BBS with Ubuntu's (Bash on Windows/WSL) built-in telnet, but you may run into issues with local echo (you'll want to Ctrl-] then type "mode char") as well as some missing extended ASCII characters that BBS's loved to draw menus with. While WSL's ANSI support is good, these missing characters cause hiccups. SyncTERM is totally custom with a whole host of bitmapped fonts and a lot of custom work around extended control sequences. You should also try out EtherTerm, Qodem, and NetRunner as other cool BBS-friendly terminal options.

NOTE: One of the major challenges of the conhost (console host - the thing that paints the console window, hosts and paints text, and handles keyboard input for bash/cmd/powershell) is that while there are lots of great console fonts, those fonts don't often include some of the obscure extended ASCII DOS characters that BBS's used to draw their menus. In order to find and render those glyphs, consoles will use "font fallback" and follow a tree of fonts, looking for the best glyph. As I understand it (I could be wrong), the current conhost - lovely as it is - doesn't yet support this. I think it should, in order to be a complete and effective solution for telnet/ssh/etc.

Run GameSrvConsole and it will listen on localhost by default. You could set up a VM in Azure and run it there to make your BBS and Door Games available to the public if you'd like! Then, either "telnet localhost" or run "syncterm localhost" to access your BBS. You can ALT-ENTER to put SyncTERM full screen, which is awesome.

Your new BBS

Once you sign up for your BBS with a new account, you can try out the Door Games menu. Selecting a Door Game will cause GameSrv to launch DOSBox and run the Door, while brokering the output back to your telnet client.

Running a Door Game - Ambroshia Test of Time

I'm heartened to see 20 year old BBS Door Games come to life on Windows 10. I'm going to see if my 10- and 12-year-olds get a kick out of some of these adventure games.

An adventure door game

Finally, and slightly related, try "curl wttr.in/portland" in a large WSL (Linux) console. Lovely. I love stuff like this. Perhaps I'm easily impressed, or I just miss ASCII art.


Head over to https://github.com/rickparrish/GameSrv and STAR his GitHub Repository and if you like GameSrv and appreciate the work involved, you can donate to Rick as well. I have!


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




You got this! You know the fundamentals. You are a learner. Plus The Imposter's Handbook


Sometimes we all get overwhelmed. There's a million (no irony there) reasons to be overwhelmed today, to be sure. I got an email from a community member who was feeling like they hadn't kept up on the latest tech. Of course, anything you learn today will be obsolete tomorrow, right? I'm overwhelmed thinking of it!

I wrote a little thread about this on Twitter and I wanted to expand on it here.

Maybe you're a dev who's been keeping up and fresh on the latest since jump, or maybe you've been using the same reliable framework for your whole career.

It can be totally overwhelming when you "wake up" and look around and notice that you don't know NOUN.js or ASPNET 10 or the like. You feel like it's over, and you've missed the boat. I want to encourage you. You're a developer! You have a good base to build on!

You may not know today's JavaScript/Java/C# but you DO know JavaScript/Java/C#. Yes, the Internet moved your cheese while you were sleeping, but you DID grow. When talking to employers, emphasize the base of knowledge you bring. Frameworks come and go. Fundamentals remain.

I really recommend Rob Conery's "The Imposter's Handbook" as a great way to reinforce those fundamentals and core concepts. Rob has been programming for years, but without a CS degree. This book is about all the things he learned and all the gaps that got filled in while he was overwhelmed.

Yes this is a squishy blog post, but sometimes that's what's needed. You are smart, you are capable. Look at the replies to the twitter thread and you'll see you are not alone. Your job as a programmer is to be the figure-outer.


Sponsor: Unleash a faster Python! Supercharge your applications performance on future forward Intel® platforms with The Intel® Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python Now!




Building a Raspberry Pi Car Robot with WiFi and Video


The SunFounder Raspberry Pi Car kit comes with everything you need except the 18650 batteries. You'll need to get those elsewhere.

Last year I found a company called SunFounder that makes great Raspberry Pi-related kits and stuff. I got their Raspberry Pi 10" Touchscreen LCD and enjoyed it very much. This month I picked up the SunFounder PiCar 2.0 kit and built it with the kids. The kit includes everything you need except for the Raspberry Pi itself, a mini SD card (the Pi uses that as its hard drive), and two 18650 rechargeable lithium batteries. Those batteries are enough both to power the Pi itself (so the car isn't tethered) and to provide enough voltage to run the 3 servos AND the motors to drive and steer the car around. You can also expand the car with other attachments like light sensors, line followers, and more.

The PiCar 2.0 includes the chassis, a nice USB WiFi adapter with antenna (one less thing to think about if you're using a Raspberry Pi like me), and a USB webcam for computer vision scenarios. It includes a TB6612 Motor Driver and a PCA9685 PWM (Pulse Width Modulation) Servo Driver with 16 channels for future expansion. The kit also helpfully includes all the tools: screwdriver, wrenches, and bolts.

Preparing to build the SunFounder Raspberry Pi car

All the code for the SunFounder PiCar-V is on GitHub and while there can be a few hiccups with some of the English instructions, there are a bunch of YouTube videos and folks online doing the same thing so we had no trouble making the robot in a weekend.

Building a SunFounder Raspberry Pi Car

PRO TIP - Boot your new Raspberry Pi up with ssh enabled and already joined to your wifi

You'll need to use a tool like Etcher.io to burn a copy of the Raspbian operating system on to a mini SD card. I prefer to save time and avoid having to connect a new Raspberry Pi to HDMI and a mouse and keyboard, so I get the Pi onto my wifi network and enable SSH by copying these two files to the root of the file system of the freshly burned mini SD card. This will cause the Pi to automatically join your network when it boots up for the first time. Then I used Ubuntu on Windows 10 to ssh into the Pi and follow the instructions.

  • Make a 0 byte file called "ssh" and copy it to the root of the new PI disk
  • Make a file called "wpa_supplicant.conf" with Unix-style (LF) line endings and make it look like this. Copy it to the root of the new PI disk.
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="YOURWIFI"
    scan_ssid=1
    psk="yourwifipassword"
    key_mgmt=WPA-PSK
}

I like to use Notepad2 or Visual Studio Code to change the line endings of a file. You can see the CRLF or the LF in the status bar and click it. Unix/Raspbian/Raspberry Pi likes just an LF (line feed) for the line ending, while Windows defaults to using CRLF (Carriage Return/Line Feed, or 0x0D 0x0A) for text files.

Changing the line endings to Unix

The default Raspberry Pi username is pi and the default password is raspberry. You may want to change that. SunFounder has a decent "install_dependencies" script that you'll run on the Pi:

Installing the PiCar dependencies

Once you've built the PiCar you can ssh in and run their development server that gives you a little WebAPI to control the car. The SunFounder folks are pretty good at web development (less so with mobile apps) and have a nice Django app to control the PiCar.

Here's the view from the front camera of the PiCar as viewed through the local website on port 8000. It's looking at my computer looking at itself. ;)

Viewing the PiCar camera through the Django website

You're able to control the PiCar from this web interface with the keyboard. You can move the car and steer with WASD, as well as move the head/camera independently. You will need to enter the settings area (upper right corner) and calibrate the back wheels direction. By default, one wheel may go the opposite direction because they can't be sure how you mounted them, so you'll need to reverse one wheel to ensure they both go in the same direction.

They also included a client application, also written in Python. On Windows you'll need to install Python, and when you run client.py you may get an error:

ImportError: No module named requests

You'll need to run "pip3 install requests" as that module isn't installed by default.

Additionally, Python apps aren't smart about High-DPI displays, so I went to C:\Users\scott\appdata\local\Programs\Python\Python36, right-clicked Python.exe, and set the DPI setting to "System (Enhanced)" like this.

Overriding DPI settings to System (Enhanced)

The client app is best for "Zeroing out" the camera and wheels, in case they are favoring one side or the other.


All in all, building the SunFounder "Raspberry Pi Video Car Kit 2.0" with the kids was a great experience. The next step is to see what else we can do with it!

  • Add a speaker so it talks?
  • Add Alexa support so you can talk to it?
  • Make the car drive around and take pictures, then use Azure cognitive services to announce what it sees?
  • Or, as my little boys say, "add weapons and make another bot for it to fight!"

What do you think?

* I use Amazon affiliate links and appreciate it when you use them! It supports this blog and sometimes gives me enough money to buy gadgets like this!


Sponsor: Unleash a faster Python! Supercharge your applications performance on future forward Intel® platforms with The Intel® Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python Now!



How to set up Kubernetes on Windows 10 with Docker for Windows and run ASP.NET Core


Docker for Windows is really coming along nicely. They have both a Stable and an Edge channel, and the Edge (beta, experimental) one just included a lovely new feature - Kubernetes support. Per their docs, Kubernetes is only available in Docker for Windows 18.02 CE Edge. They set most everything up nicely, put kubectl into your path, and set up a context. If you use kubectl for other things - like your own Raspberry Pi Kubernetes Cluster - then you'll need to be aware of switching contexts. The same thing applies if you have one in the cloud, like the Kubernetes Cluster I made in Azure AKS.

Got Docker for Windows? If you have not yet installed Docker for Windows, see Install Docker for Windows for an explanation of stable and edge channels, system requirements, and download/install information.

It's easy to get started, just click "Enable Kubernetes" and Docker for Windows will download and start the images you need. I clicked "show system containers" because I like to see what's hidden from me, but you decide for yourself. Do be aware - there's a TON.

Enabling Kubernetes in Docker for Windows

By default, you won't get the Kubernetes Dashboard - of which I'm a fan - so you may want to install that. If you follow the default instructions (and you're a noob like me) then you'll likely end up with a Dashboard that is pretty locked down. It can be somewhat frustrating to get access to your own development dashboard, so I use the alternative (read: totally insecure) dashboard, like this:

C:\> kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

I also like charts and graphs so I added these as well:

C:\> kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
C:\> kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
C:\> kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml

I can access the dashboard by running "kubectl proxy" and then visiting http://localhost:8001/ui, which redirects to the dashboard:

Kubernetes Dashboard

Now I can run through all the cool Kubernetes tutorials like the Guestbook Kubernetes Sample Application from the convenience of my Windows 10 machine. (I'm running a Surface Book 2 on the current non-Beta Windows 10.)

There are a lot of nice samples on running .NET Core and ASP.NET Core apps with Docker up at https://github.com/dotnet/dotnet-docker-samples/

I made a quick ASP.NET Core app called kubeaspnetapp:

C:\Users\scott\Desktop>dotnet new razor -o kubeaspnetapp
The template "ASP.NET Core Web App" was created successfully.
...snip...
Restore succeeded.

Then I added a two-stage build Dockerfile that looks like this:

FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# build runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "kubeaspnetapp.dll"]

And built and tagged the image with:

C:\Users\scott\Desktop\kubeaspnetapp>docker build -t kubeaspnetapp:v1 .

Then I create a quick Deployment that manages a Pod that runs the Container:

C:\Users\scott\Desktop\kubeaspnetapp>kubectl run kubeaspnetapp --image=kubeaspnetapp:v1 --port=80
deployment "kubeaspnetapp" created

Now I'll expose it to the "outside." Again, this is usually done with .yaml files but it's a good learning exercise and it's all local.

C:\Users\scott\Desktop\kubeaspnetapp>kubectl get deployments
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubeaspnetapp   1         1         1            1           1m
C:\Users\scott\Desktop\kubeaspnetapp>kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
kubeaspnetapp-778f6d49bd-rct59   1/1       Running   0          1m
C:\Users\scott\Desktop\kubeaspnetapp>
C:\Users\scott\Desktop\kubeaspnetapp>
C:\Users\scott\Desktop\kubeaspnetapp>kubectl expose deployment kubeaspnetapp --type=NodePort
service "kubeaspnetapp" exposed
C:\Users\scott\Desktop\kubeaspnetapp>kubectl get services
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubeaspnetapp   LoadBalancer   10.98.234.67   <pending>     80:31756/TCP     5s
kubernetes      ClusterIP      10.96.0.1      <none>        443/TCP          1d
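
For reference, that "kubectl expose" is roughly equivalent to applying a small Service manifest like this (a sketch; the service Kubernetes actually generated will differ in detail):

apiVersion: v1
kind: Service
metadata:
  name: kubeaspnetapp
spec:
  type: NodePort
  selector:
    run: kubeaspnetapp     # kubectl run labels the pods with run=<name>
  ports:
  - port: 80               # service port
    targetPort: 80         # container port the app listens on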

Then I'll hit http://127.0.0.1:31756 in my browser...note how that port is brokering to the internal port 80 where the app listens...and there's my ASP.NET Core app running locally on Kubernetes, set up with Docker for Windows. Nice.

My ASP.NET Core app running in Kubernetes local on my Windows 10 machine

Here's me getting the startup logs from that pod:

C:\Users\scott\>kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
kubeaspnetapp-7fd7f7ffb9-8gnzd   1/1       Running   0          6m
C:\Users\scott\Dropbox\k8s for pi\aspnetcoreapp>kubectl logs kubeaspnetapp-7fd7f7ffb9-8gnzd
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.

Pretty cool. As all the tooling across Windows, Docker, Kubernetes, and Visual Studio (all flavors) continues to get better and better, I can only imagine this experience will improve too. I look forward to a time when I can freely mix containers from different OSes and easily push them all en masse to Azure.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




Everyone should get a Dashcam


A clean dashcam installation

I've put dashcams in both my car and my wife's car. They've already captured two accidents: one where I was rear-ended and one where someone fell asleep while driving a few cars ahead of me on the freeway.

After these two experiences, I will never drive a car without a dashcam again. Case in point - being rear-ended. I was at a red light, it turned green, and as I accelerated I got nailed from behind, pushing me into the intersection. The gent jumped out and started yelling and waving his arms, saying I backed up (!), and I said, "I'm sorry, but I've got a dashcam both front and back." He got really quiet, and then we exchanged information. When I called the insurance company on Monday and told them I had not only dashcam footage but that the cam recorded date and time, GPS coordinates, speed, and 1080p video both front and back, including the face and license plate of the other driver...I had a check that Thursday afternoon.

I was driving at night on I-5 from Seattle to Portland and noticed a truck two or three (long) car lengths ahead of me start to drift, drift, drift off to the side...and then suddenly jerk hard to the left, cross all lanes of traffic, and slam into the median in a shower of sparks, eventually coming to rest balanced on top of the center median. While I wasn't involved in the accident, I pulled over and Dropboxed the video to the cops right there. The officer on duty said that dashcam footage made things 100% easier.

A cropped and somewhat compressed version of this video is embedded below, and also linked here. Now, it was late at night and I've cropped it, but you can see the car get "sleepy" and slowly float across all lanes to the right, hit the right side, then overcompensate and hit the center. This contradicted the driver's statement that he was hit by another car.

Disclaimer: This is older DR650 footage in the dead of night that's been cropped to remove identifying info. Check out this example Dashcam footage of a DR750 for a better sense of what to expect.

I've put Blackvue dashcams in both our cars. I put a Blackvue DR750S-2CH with Power Magic in my car. The Power Magic will power the dashcam while the car is parked and catch anything that happens even when the car is off, and it will shut itself off if it detects that it's discharging the 12V battery below a set voltage. I like the DR750 because it's 60fps 1080p on the front, and it can optionally buffer the video to memory so it's not beating on the SD card and shortening its life. It also has g-force and impact sensors, so as you get in the car it'll say (literally speak) "an impact was detected while in parking mode."

My wife didn't care about these more advanced features, so she got the Blackvue DR650S-2CH. It's last year's model but still does 1080p front and back. There's a main wire that handles power for the main unit (either from the 12V cigarette lighter or the Power Magic), then there's a long, long wire that you'll fish through the plastic panels of your car to power and run the back camera.

It only took about two hours for me to install the camera, per car, and installation consisted mostly of hiding wires in the existing plastic panels and pushing the wires out of sight. The final look is very sanitary and requires zero maintenance.

The camera has wifi built-in and there's a free app to download. You connect your phone (whenever necessary) to the camera's wifi and download videos as needed. That's why it was super easy for me to Dropbox the footage without connecting to a PC. That said, there are Blackvue desktop apps that will show you maps with your position and speed and allow you to stitch footage together. You can also stamp date, time, speed, and custom text to the footage so it's embedded in the resulting MP4s.

I've had zero issues with my dashcams, and as I said, I'm sold. It's a no-brainer and frankly, it should be built into every car. I'll be installing a dashcam in whatever car my soon-to-be teenager drives, count on it.

Maybe you won't get into an accident (hopefully!) but you could catch a meteor on your dashcam!

* I use Amazon links to products. When you use them, you're supporting this blog! Thanks!


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




Why should I care about Kubernetes, Docker, and Container Orchestration?


A person at work chatted me, commenting on my recent blog posts on the Raspberry Pi Kubernetes Clusters that are being built, and wondered "why should I care about Kubernetes or Docker or any of that stuff?"

WOCinTech Chat pic used under CC

Great question, and I'm figuring it out myself. There are lots of resources out there, but none that spoke my language, so here are my thoughts and how I explain it.

"Hey, I have this great new blog app!"

"Fab, gimme!"

"Sure, first make sure you have this version of Windows/Linux, this version of .NET/Python/Node, and these prerequisites."

"Hang on, lemme call you next week when that's handled."

This is how software was built for years. Now let's deploy it.

"Here's the code/dlls/application zipped up."

"Lemme FTP/SFTP/Drag this from one Explorer Window to another."

"Is this version of that file set to this?"

"Wait, what?"

"Make sure that system/boss/dll/nounjs is version 4.5.4.1, they patched it."

"Ok, Imma shush* into production."

Again, we've all been there. It's 2018 and there's more folks doing this than you care to admit.

Enter Virtual Machines! Way better, right? Here's a USB key with a file that is EVERYTHING you need. Handled.

"Forget that, use this. It's better than a computer, it's a Virtual Machine. But be aware, It doesn't know it's Virtual, so respect the lie."

"OK, email it to me."

"Well, it's 32 gigs. Lemme UPS it."

Your app is only 100 megs, and this VM is tens of gigs. Why does a 150 pound person need a 6000lb Hummer? Isolation, I guess.

"The app is getting more complex, but it's cool. There's four VMs now. One for the DB, one for Redis, and a front end one, and the shopping cart gets one. It's microservices!"

"I'm loving it."

"Here's a 2 TB drive."

Nice that we're breaking it up, but not so nice that we're getting bloated. Now we have to run apt upgrade/windows update on all these things and maintain them. Why drive a Hummer when I can get a Lyft?

"Ok I got them all running on this beefy machine under my desk."

"Cool, we're moving to the cloud."

"Sigh. I need to update all these connection strings and start uploading VMs."

"It'll be great. It's like a machine under your desk, except your desk is in the cloud."

"What's the cloud?"

"It's a server room you can't see. Basically it's the computers under your desk. But invisible."

Most VM infrastructure is pretty sloppy. It's hard-coded IP addresses, it's poorly named VMs living in the same subnets. Then we'll move them to the cloud (lift and shift!) and they're still messy, but hey, they're in the Cloud™, right?

"You know, all these VMs are heavy. I have to maintain and move a bunch of stuff that ISN'T the app. Containers are the way. Just define the app's base requirement and share everything else."

"I've been hearing about this. I can type "docker run hello-world" and on any machine it'll load the hello world image (based on Ubuntu) from a central hub and run it in a mostly isolated way. Guaranteed to work and run, even as time passes."

"Nice, because more and more parts of our app are in .NET Core on Linux, but there's also some Python and node."

"Yep and it'll all just run as the prerequisites are clearly listened in the container...and the prereqs are in fact references to other container images."

"It's containers all the way down."

Now the DB, Redis, the front end, and the shopping cart can all be defined in some simple text files. Rather than your Host OS (the main computer...the metal) loading up a bunch of Guest OSes (literally copies!) and then loading all the apps and prerequisites, you'll share OSes and, when appropriate, the binaries and libraries.

"OK, now we have a bunch of containers running in Docker, but sometimes they go down or stop."

"Run them again?"

"It's more that that, we need to sometimes have 3 shopping cart containers, and other times we need 2 or more DB containers. Plus their IPs sometimes change"

"So we need something to keep them running, scale or auto-scale them, as well manage networking and naming/dns."

Enter a container orchestrator. There's Docker Swarm, Mesos/Marathon, Azure Service Fabric, and others, but for this post we'll use Kubernetes.

"So Kubernetes runs my containers, keeps them running, and helps manage the network?"

"Yes, and no. Parts of Kubernetes - or k8s, as cool people like me who have been using it for nearly 3 hours say - are part of the master components, like etcd for key value storage, and the kube-scheduler for selecting what node to run a "pod" on (a pod is cooler to say than container, but sometimes a pod is more than one container. Still, very cool.)

"I'll need to make a glossary."

"Darn tootin' you will."

Kubernetes has basically pluggable everything. Don't like their networking setup? There's literally over a dozen options. Want better charts and graphs? Whole world of options.

Just as one Dockerfile can declare what's needed to run an app, a Kubernetes YAML file describes not only the containers, but the ports needed, the number of replicas of each (think web farm), names, environment variables, and more. Here's a file that shows a front end, back end, and load balancer. Everything is there: connection strings become internal DNS lookups, every service has a load balancer (if you like), and you can scale manually or auto-scale.
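
A minimal sketch of one of those YAML files, with entirely made-up names, might look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart
spec:
  replicas: 3                     # think web farm
  selector:
    matchLabels:
      app: shopping-cart
  template:
    metadata:
      labels:
        app: shopping-cart
    spec:
      containers:
      - name: shopping-cart
        image: mystore/shopping-cart:1.0
        ports:
        - containerPort: 80
        env:
        - name: DB_HOST           # "connection strings" become internal DNS names
          value: db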

"Ok so why should I care?"

"A few reasons. In the past, to install our app I'd need to give you a Word document and a weekend. Now you type kubectl apple theapp.yaml and it's running in less than a minute."

"I'm still billing for the weekend."

Simply stated, we are at the beginning of a new phase of DevOps. One that is programmatic, elastic, and declarative. It's consistent and clear and modular.

I recommend you check out Julia Evans' "Reasons Kubernetes is cool" as well as reading up on how to make a Kubernetes cluster (and the management VMs are free) in Azure.

* I'm trying to make shush a thing. We don't Es Es Eaytch into machines! We shush in! It's pronounced somewhere between shush and shoosh. Make sure you throw in a little petit jeté when you say it.

* Pic used under CC


Sponsor: Unleash a faster Python! Supercharge your applications performance on future forward Intel® platforms with The Intel® Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python* Now!




Surface Book 2 Developer Impressions and the Magic of USB-C


Surface Book 2 15"

I recently got an updated laptop for work, a 15" Surface Book 2. It's quickly become my go-to machine, and I often find myself using it more than my main desktop machine.

I considered myself reasonably familiar with the Surface product line as I bought a Surface Pro 3 a few years back for myself (not a work machine), but I am genuinely impressed with this Surface Book 2 - and that surprised me.

Here's a random list of a tips, tricks, things I didn't realize, and general feelings about the 15" Surface Book 2.

15" is a NICE size

After years of "Ultrabooks" I missed an actual high-powered desktop replacement laptop. It's just 4.2 lbs and it doesn't feel unwieldy at all.

There are TWO Surface Connect ports

Legit had no idea. You can charge and dock the tablet part alone.

There's a full sized SD card reader and a 3.5mm headphone jack

Which sadly is more than I can say for my iPhone 8+.

Having a 15" screen again makes me wonder how you 11" MacBook Air people can even concentrate.

3240 x 2160 (260 PPI) is a weird resolution to be sure, but it's a hell of a lot of pixels. It's a 15" retina display.

The high resolution issues in Windows are 90% handled IMHO

I wrote about how running any DPI greater than 96dpi on Windows has historically sucked back in 2014, but literally every little Windows Update and Office update improves it. Only the oldest apps I run have any real issues. Even WinForms has been updated to support HighDPI so I have zero HighDPI issues in my daily life in 2018.

More RAM is always nice, but 16 gigs is today's sweet spot.

I have had zero RAM issues, and I'm running Kubernetes and lots of Docker containers alongside VS, VS Code, Outlook, Office, Edge, Chrome, etc. Not one memory issue.

3.84 GHz or more

Battery Life and Management is WAY better

Power Mode Slider

Battery life on my Surface Pro 3 was "fine." You know? Fine. It wasn't amazing. Maybe 4-6 hours depending. However, the new battery slider in Windows 10 Creators Edition makes a simple and measurable difference. You can see the CPU GHz and brightness ratchet up and down. I set it to best battery life and it'll go 8+ hours easy. The CPU will hang out around 0.85 GHz and I can type all day at 40% brightness. Then, when I want to compile, I pull it up to bursts of 3.95 GHz and take care of business.

HD Camera FTW

Having a 1080p front facing camera makes Skype/Zoom/etc calls excellent. I even used the default Camera app today during an on-stage presentation and someone later commented on how clear the camera was.

USB-C - I didn't believe it, but it's really a useful thing

Honestly, I wasn't feeling the hype around USB-C "one connector to rule them all," but today I was about to pull out my HDMI and Ethernet dongles here at the Webstock conference in New Zealand when they mentioned that all day they'd been using a Dell USB-C dock. I plugged in one cable - I didn't even use my Surface power brick - and got HDMI, a USB hub, Ethernet, *and* power going back into the Surface Book. I think a solution like this will/should become standard for conferences. It was absolutely brilliant.

I have read some concerns about charging the Surface Book 2 (and other laptops with USB-C), and there's a reddit thread with some detail. One poster says the Apple USB-C charger he bought charges the Surface Book at 72% of the speed of the primary charger. My takeaway is: ok, the included charger will always charge fastest, but this will not only work in a pinch, it's a perfectly reasonable desk-bound or presenter solution. Just as my iPhone will charge - slowly - with aftermarket USB chargers. If you're interested in the gritty details, you can read about the conversation that the Surface has with an Apple charger over USB as they negotiate how much power to give and take. Nutshell: USB-C chargers that can do 60W will work, but 90+W is ideal - and the Dell dock handles this well, which makes it a great flexible solution for conferences.

Also worth pointing out: there wasn't any perceptible "driver install" step. I got all the Dell dock's benefits just by plugging it in at the conference. Note that I use a Surface Dock (the original/only one?) at home. In fact, the same Surface Dock I got for my personal Surface Pro 3 is in use by my new Surface Book 2. Presumably it doesn't output the full 95W that the Surface Book 2 can use, but in daily 10+ hour use it's been a non-issue. There are articles about how you can theoretically drain a Surface Book 2's battery if you're using more power than it's getting from a power supply, but I haven't had that level of sustained power usage. Haven't needed to give it a thought.

The i7 has an NVidia 1060 with 6 gigs of RAM, so you can install the GeForce software and run apps on the discrete GPU

You can go in and control which apps run on which GPU (for power savings, or graphical power), or you can right-click an app and Run on NVidia.

You can control which GPU on a program by program basis

or right-click any app:

Right Click | Run with Graphics Processor

It has an Xbox Wireless Adapter built in

I got this for work, so it's not a gaming machine...BUT it's got that NVidia 1060 GPU, and I just discovered there's an Xbox Wireless Adapter built in. I thought this was just Bluetooth, but it's some magical low-latency thing. You can buy the $25 USB Xbox Wireless Adapter for your PC and use all your Xbox controllers with it - BUT it's built in, so handled. What this means for me as a road warrior is that I can throw an Xbox controller into my bag and play Xbox Play Anywhere games in my hotel.

Conclusion

All in all, I've had no issues with the Surface Book 2, given that I stay on released software (no Windows 10 Insiders Fast on this machine). It runs 2 external monitors (3 if you count its 15" display), and it both compiles fast and plays games well.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




One Email Rule - Have a separate Inbox and an Inbox CC to reduce email stress. Guaranteed.


Two folders in your email client: one called "Inbox" and one called "Inbox - CC"

I've mentioned this tip before, but once more for the folks in the back. This email productivity tip is a game-changer for most information workers.

We all struggle with email.

  • Some of us just declare Email Bankruptcy every few months. Ctrl-A, delete, right? They'll send it again.
  • Some of us make detailed and amazing Rube Goldbergian email rules and deliberately file things away into folders we will never open again.
  • Some of us just decide that if an email scrolls off the screen, well, it's gone.

Don't let the psychic weight of 50,000 unread emails give you headaches. Go ahead, declare email bankruptcy - you're already in debt - then try this one email rule.

One Email Rule

Email in your inbox is only for email where you are on the TO: line.

All other emails (BCC'ed or CC'ed) should go into a folder called "Inbox - CC."

That's it.

I just got back from a week away. Look at my email there. 728 emails. Ugh. But just 8 were sent directly to me. Perhaps that's not a realistic scenario for you, sure. Maybe it'd be more like 300 and 400. Or 100 and 600.

Point is, emails you are CC'ed on are FYI (for your information) emails. They aren't Take Action Now emails. Now if they ARE, then you need to take a moment and train your team. Very simple: just reply and say, "oops, I didn't see this immediately because I was cc'ed. If you need me to see something now, please to: me." It'll just take a moment to "train" your coworkers because this is a fundamentally intuitive way to work. They'll say, "oh, makes sense. Cool."

Try this out and I guarantee it'll change your workflow. Next, do this. Check your Inbox - CC less often than your Inbox. I check CC'ed email a few times a week, while I may check Inbox a few times a day.

If you like this tip, check out my complete list of Productivity Tips!


Sponsor: Unleash a faster Python! Supercharge your applications performance on future forward Intel® platforms with The Intel® Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python* Now!




The Squishy Side of Open Source


The Squishy Side of Open Source

A few months back my friend Keeley Hammond and I did a workshop for Women Who Code Portland called The Squishy Side of Open Source. We'd done a number of workshops before on how to use Git and the Command Line, and I've done a documentary film with Rob Conery called Get Involved In Tech: The Social Developer (watch it free!), but Keeley and I wanted to really dive into the interpersonal "soft" or squishy parts. We think that we all need to work to bring kindness back into open source.

Contributing to open source for the first time can be scary and a little overwhelming. In addition to the technical skills required, the social dynamics of contributing to a library and participating in a code review can seem strange.

That means how people talk to each other, what to do when pull requests go south, and how to handle issues that heat up due to misunderstandings.

Keeley has published the deck on SpeakerDeck. In this workshop, we talked about the work and details that go into maintaining an open source community, told real stories from our experiences, and went over what to expect when contributing to open source and how to navigate it.

Key Takeaways:

  • Understanding the work that open source maintainers do, and how to show respect for them.
  • Understanding Codes of Conduct and Style Guides for OSS repos and how to abide by them.
  • Tips for communicating clearly, and dealing with uncomfortable or hostile communication.

Good communication is a key part of contributing to open source.

  • Give context.
  • Do your homework beforehand. It’s OK not to know things, but before asking for help, check a project’s README, documentation, issues (open or closed) and search the internet for an answer.
  • Keep requests short and direct. Many projects have more incoming requests than people available to help. Be concise.
  • Keep all communication public.
  • It’s okay to ask questions (but be patient!). Show them the same patience that you’d want them to show to you.
  • Keep it classy. Context gets lost across languages, cultures, geographies, and time zones. Assume good intentions in these conversations.

Where to start?

What are some good resources you've found for understanding the squishy side of open source?


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




az webapp new - Azure CLI extension to create and deploy a .NET Core or nodejs site in one command


az webapp new

The Azure CLI 2.0 (command line interface) is a clean little command line tool to query the Azure back-end APIs (which are JSON). It's easy to install and cross-platform.

Once you've got it installed, go "az login" and get authenticated. Also note that the most important switch (IMHO) is --output:

usage: az [-h] [--output {json,tsv,table,jsonc}] [--verbose] [--debug]

You can get json (or the more condensed jsonc), tables (for humans), or tsv (tab separated values) for your awks and seds.

A nice first command after "az login" is "az configure" which will walk you through a bunch of questions interactively to set up defaults.

Then I can "az noun verb" like "az webapp list" or "az vm list" and see things like this:

C:\Users\scott> az webapp list

Name Location State ResourceGroup DefaultHostName
------------------------ ---------------- ------- -------------------------- ------------------------------------------
Hanselminutes North Central US Running Default-Web-NorthCentralUS Hanselminutes.azurewebsites.net
HanselmanBandData North Central US Running Default-Web-NorthCentralUS hanselmanbanddata.azurewebsites.net
myEchoHub-WestEurope West Europe Running Default-Web-WestEurope myechohub-westeurope.azurewebsites.net
myEchoHub-SouthEastAsia Southeast Asia Stopped Default-Web-SoutheastAsia myechohub-southeastasia.azurewebsites.net

The Azure CLI supports extensions (plugins) that you can easily add, and the Azure CLI team is experimenting with a few ideas that they are implementing as extensions. "az webapp new" is one of them so I thought I'd take a look. All of this is open source and on GitHub at https://github.com/Azure/azure-cli and is discussed in the GitHub issues for azure-cli-extensions.

You can install the webapp extension with:

az extension add --name webapp

The new command "new" (I'm not sure about that name...maybe deploy? or createAndDeploy?) is basically:

az webapp new --name [app name] --location [optional Azure region name] --dryrun

Now, from a directory, I can make a little node/express app or a little .NET Core app (with "dotnet new razor" and "dotnet build") then it'll make a resource group, web app, and zip up the current folder and just deploy it. The idea being to "JUST DO IT."

C:\Users\scott\desktop\somewebapp> az webapp new --name somewebappforme

Resource group 'appsvc_rg_Windows_CentralUS' already exists.
App service plan 'appsvc_asp_Windows_CentralUS' already exists.
App 'somewebappforme' already exists
Updating app settings to enable build after deployment
Creating zip with contents of dir C:\Users\scott\desktop\somewebapp ...
Deploying and building contents to app.This operation can take some time to finish...
All done. {
"location": "Central US",
"name": "somewebappforme",
"os": "Windows",
"resourcegroup": "appsvc_rg_Windows_CentralUS ",
"serverfarm": "appsvc_asp_Windows_CentralUS",
"sku": "FREE",
"src_path": "C:\\Users\\scott\\desktop\\somewebapp ",
"version_detected": "2.0",
"version_to_create": "dotnetcore|2.0"
}

I'd even like it to make up a name so I could maybe "az webapp up" or even just "az up." For now it'll make a Free site by default, so you can try it without worrying about paying. If you want to upgrade or change it, do so either with the az command or in the Azure portal. Also, the site ends up at <name>.azurewebsites.net!

DO NOTE that these extensions are living things, so you can update after installing with

az extension update --name webapp

like I just did!

Again, it's super beta/alpha, but it's an interesting experiment. Go discuss on their GitHub issues.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




Upgrading a 10 year old site to ASP.NET Core's Razor Pages using the URL Rewriting Middleware


Visual Studio Code editing my new ASP.NET Core site using Razor Pages

My podcast website (over 600 episodes; every week for many years, you do the math! And subscribe!) was written in ASP.NET Web Pages many years ago. "Web Pages" (horrible name) was its own thing. It wasn't ASP.NET Web Forms, nor was it ASP.NET MVC. However, while open-source and cross-platform ASP.NET Core uses the "MVC" pattern, it includes an integrated architecture that supports pages created in the model-view-controller style, Web APIs that return JSON/whatever from controllers, and a routing system that works across all of these. It also includes "Razor Pages."

On first blush, you'd think Razor Pages is "Web Pages" part two. I thought that, but it's not. It's an alternative model to MVC but it's built on MVC. Let me explain.

My podcast site has a home page, a single episode page, and an archives page. It's pretty basic. Back in the day I felt an MVC-style site would just be overkill, so I did it in a page model. However, the code ended up (no disrespect intended) very 90s-style PHPy. Basically one super-page with too much state management and all the URL cracking happening at the top of the page.

What I wanted was a Page-focused model without the ceremony of MVC while still being able to dip down into the flexibility and power of MVC when appropriate. That's Razor Pages. Best of all worlds and simply another tool in my toolbox. And the Pages (.cshtml) are Razor so I could port 90% of my very old existing code. In fact, I just made a new site with .NET Core with "dotnet new razor," opened up Visual Studio Code, and started copying over from (gasp) my WebMatrix project. I updated the code to be cleaner (a lot has happened to C# since then) and had 80% of my site going in a few hours. I'll switch Hanselminutes.com over in the next few weeks. This will mean I'll have a proper git checkin/deploy process rather than my "publish from WebMatrix" system I use today. I can containerize the site, run it on Linux, and finally add Unit Testing as I've been able to use pervasive Dependency Injection that's built into ASP.NET.

Merging the old and the new with the ASP.NET Core's URL Rewriting Middleware

Here's the thing though, there's parts of my existing site that are 10 years old, sure, but they also WORK. For example, I have existing URL Rewrite Rules from IIS that have been around that long. I'm pretty obsessive about making old URLs work. Never break a URL. No excuses.

There are still links around that have horrible URLs in the VERY original format that (not my fault) used database IDs, like https://hanselminutes.com/default.aspx?ShowID=18570. Well, that database doesn't exist anymore, but I don't break URLs. I have these old URLs stored alongside my new system, and along with dozens of existing rewrite URLs I have an "IISUrlRewrite.xml" file. This was IIS-specific and used with the IIS URL Rewrite Module, but you've all seen these before with things like Apache's mod_rewrite. Those files are often loved and managed and carried around for years. They work. A lot of work went into them. Sure, I could rewrite all these rules with ASP.NET Core's routing and custom middleware, but again, they already work. I just want them to continue to work. They can, with ASP.NET Core's URL Rewriting Middleware that supports Apache mod_rewrite AND IIS URL Rewrite without using Apache or IIS!

Here's a complex and very complete example of mixing and matching. Mine is far simpler.

public void Configure(IApplicationBuilder app)
{
    using (StreamReader apacheModRewriteStreamReader =
        File.OpenText("ApacheModRewrite.txt"))
    using (StreamReader iisUrlRewriteStreamReader =
        File.OpenText("IISUrlRewrite.xml"))
    {
        var options = new RewriteOptions()
            .AddRedirect("redirect-rule/(.*)", "redirected/$1")
            .AddRewrite(@"^rewrite-rule/(\d+)/(\d+)", "rewritten?var1=$1&var2=$2",
                skipRemainingRules: true)
            .AddApacheModRewrite(apacheModRewriteStreamReader)
            .AddIISUrlRewrite(iisUrlRewriteStreamReader)
            .Add(MethodRules.RedirectXMLRequests)
            .Add(new RedirectImageRequests(".png", "/png-images"))
            .Add(new RedirectImageRequests(".jpg", "/jpg-images"));

        app.UseRewriter(options);
    }

    app.Run(context => context.Response.WriteAsync(
        $"Rewritten or Redirected Url: " +
        $"{context.Request.Path + context.Request.QueryString}"));
}

Remember, I have URLs like default.aspx?ShowID=18570, but I don't use default.aspx any more (it literally doesn't exist on disk) and I don't use those IDs (they are just stored as metadata in a new system).

NOTE: Just want to point out that last line above, where it shows the rewritten URL. Putting that in the logs, or bypassing everything and outputting it as text, is a nice way to debug and develop with this middleware; then comment it out as you get things refined and working.

I have an IIS Rewrite URL that looks like this. It lives in an XML file along with dozens of other rules. Reminder - there's no IIS in this scenario. We are talking about the format and reusing that format. I load my rewrite rules in my Configure() method in Startup:

using (StreamReader iisUrlRewriteStreamReader =
    File.OpenText("IISUrlRewrite.xml"))
{
    var options = new RewriteOptions()
        .AddIISUrlRewrite(iisUrlRewriteStreamReader);

    app.UseRewriter(options);
}

It lives in the "Microsoft.AspNetCore.Rewrite" package that I added to my csproj with "dotnet add package Microsoft.AspNetCore.Rewrite." And here's the rule I use (one of many in the old xml file):

<rule name="OldShowId">
  <match url="^.*(?:Default.aspx).*$" />
  <conditions>
    <add input="{QUERY_STRING}" pattern="ShowID=(\d+)" />
  </conditions>
  <action type="Rewrite" url="/{C:1}?handler=oldshowid" appendQueryString="false" />
</rule>

I capture that show ID and I rewrite (not redirect...we rewrite and continue on to the next segment of the pipeline) it to /18570?handler=oldshowid. That handler is a magic internal part of Razor Pages. Usually if you have a page called foo.cshtml it will have a method called OnGet or OnPost or OnHTTPVERB. But if you want multiple handlers per page you'll have OnGetHANDLERNAME so I have OnGet() for regular stuff, and I have OnGetOldShowId for this rare but important URL type. But notice that my implementation isn't URL-style specific. Razor Pages doesn't even know about that URL format. It just knows that these weird IDs have their own handler.

public async Task<IActionResult> OnGetOldShowId(int id)
{
    var allShows = await _db.GetShows();

    string idAsString = id.ToString();
    LastShow = allShows.Where(c => c.Guid.EndsWith(idAsString)).FirstOrDefault();
    if (LastShow == null) return Redirect("/"); // catch-all error case, 302 to home
    return RedirectPermanent(LastShow.ShowNumber.ToString()); // 301 to /showid
}

That's it. I have a ton more to share as I keep upgrading my podcast site, coming soon.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



Running ASP.NET Core on GoDaddy's cheapest shared Linux Hosting - Don't Try This At Home


First, a disclaimer. Don't do this. I did this to test a theory and to prove a point. ASP.NET Core and the .NET Core that it runs on are open source and run pretty much anywhere. I wanted to see if I could run an ASP.NET Core site on GoDaddy's cheapest hosting ($3, although it scales to $8) that basically supports only PHP. It's not a full Linux VM. It's locked down and limited. You don't have root. You're missing most of the tools you'd expect to have.

BUT.

I wanted to see if I could get ASP.NET Core running on it anyway. Maybe if I do, they (and other inexpensive hosts) will talk to the .NET team, learn that ASP.NET Core is open source and could easily run on their existing infrastructure.

AGAIN: Don't do this. It's hacky. It's silly. But it's hella cool. IMHO. Also, big thanks to Tomas Weinfurt for his help!

First, I went to GoDaddy and signed up for their cheap hosting. Again, not a VM, but their shared one. I registered supercheapaspnetsite.com as well. They use a cPanel-based web management system that doesn't really let you do anything. You can turn on SSH, do some PHP stuff, and generally poke around, but it's not exactly low-level.

First I ssh (shoosh!) in and see what I'm working with. I'm shooshing in with the Ubuntu on Windows 10 feature that every developer should turn on. It makes it really easy to work with Linux hosts if you're starting from Windows 10.

secretname@theirvmname [/proc]$ cat version
Linux version 2.6.32-773.26.1.lve1.4.46.el6.x86_64 (mockbuild@build.cloudlinux.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC) ) #1 SMP Tue Dec 5 18:55:41 EST 2017
secretname@theirvmname [/proc]$

OK, looks like Red Hat, so CentOS 6 should be compatible.

I'm going to use .NET Core 2.1 (which is in preview now!) and get the SDK at https://www.microsoft.com/net/download/all and install it on my Windows machine where I will develop and build the app. I don't NEED to use Windows to do this, but it's the laptop I have and it's also nice to know I can build on Windows but target CentOS/RHEL6.

Next I'll make a new ASP.NET site with

dotnet new razor

and then I'll publish a self-contained version like this:

dotnet publish -r rhel.6-x64

And those files will end up in a folder like \supercheapaspnetsite\bin\Debug\netcoreapp2.1\rhel.6-x64\publish\

NOTE: You may need to add the NuGet feed for the dailies for this .NET Core preview in order to get the RHEL6 runtime downloaded during this local publish.

Then I used WinSCP (or whatever FTP/SCP client you like - rsync, etc.) to get the files over to the ~/www folder on my GoDaddy shared site. Then I

chmod +x ./supercheapaspnetsite

to make it executable. Now, from my ssh session at GoDaddy, let's try to run my app!

secretname@theirvmname [~/www]$ ./supercheapaspnetsite
Failed to load hb, error: libunwind.so.8: cannot open shared object file: No such file or directory
Failed to bind to CoreCLR at '/home/secretname/public_html/libcoreclr.so'

Of course it couldn't be that easy, right? .NET Core wants the unwind library (shared object) and it doesn't exist on this locked down system.

AND I don't have yum/apt/rpm or a way to install it, right?

I could go looking for a tar.gz file somewhere like this http://download.savannah.nongnu.org/releases/libunwind/ but I need to think about versions and make sure things line up. Given that I'm targeting CentOS 6, I should start here https://centos.pkgs.org/6/epel-x86_64/libunwind-1.1-3.el6.x86_64.rpm.html and download libunwind-1.1-3.el6.x86_64.rpm.

I need to crack open that rpm file and get the library. RPM packages are just headers on top of a CPIO archive, so I can apt-get install rpm2cpio from my local Ubuntu instance (on Windows 10). Then from /mnt/c/users/scott/Downloads (where I downloaded the file) I will extract it.

rpm2cpio ./libunwind-1.1-3.el6.x86_64.rpm | cpio -idmv

There they are.


This part is cool. Even though I have these files, I don't have root or any way to "install" them. However I could either export/use the LD_LIBRARY_PATH environment variable to control how libraries get loaded OR I could put these files in $ORIGIN/netcoredeps. You can read more about Self Contained Linux Applications on .NET Core here.

The main executable of published .NET Core applications (which is the .NET Core host) has an RPATH property set to $ORIGIN/netcoredeps. That means that when Linux shared library loader is looking for shared libraries, it looks to this location before looking to default shared library locations. It is worth noting that the paths specified by the LD_LIBRARY_PATH environment variable or libraries specified by the LD_PRELOAD environment variable are still used before the RPATH property. So, in order to use local copies of the third-party libraries, developers need to create a directory named netcoredeps next to the main application executable and copy all the necessary dependencies into it.

At this point I've added a "netcoredeps" folder to my public folder, and then copied it (scp) over to GoDaddy. Let's run it again.

secretname@theirvmname [~/www]$ ./supercheapaspnetsite
FailFast: Couldn't find a valid ICU package installed on the system. Set the configuration flag System.Globalization.Invariant to true if you want to run with no globalization support.
   at System.Environment.FailFast(System.String)
   at System.Globalization.GlobalizationMode.GetGlobalizationInvariantMode()
   at System.Globalization.GlobalizationMode..cctor()
   at System.Globalization.CultureData.CreateCultureWithInvariantData()
   at System.Globalization.CultureData.get_Invariant()
   at System.Globalization.CultureInfo..cctor()
   at System.StringComparer..cctor()
   at System.AppDomain.InitializeCompatibilityFlags()
   at System.AppDomain.Setup(System.Object)
Aborted

Ok, now it's complaining about ICU packages. These are for globalization. That is also mentioned in the self-contained-linux apps docs and there's a precompiled binary I could download. But there are options.

If your app doesn't explicitly opt out of using globalization, you also need to add libicuuc.so.{version}, libicui18n.so.{version}, and libicudata.so.{version}

I like "opt-out" so I don't have to go dig these up (although I could), so I can either set the CORECLR_GLOBAL_INVARIANT env var to 1, or I can add System.Globalization.Invariant = true to supercheapaspnetsite.runtimeconfig.json, which I'll do just to be obnoxious. ;)
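
For reference, that property lands under runtimeOptions/configProperties in the runtimeconfig.json; a minimal sketch of the relevant bit:

{
  "runtimeOptions": {
    "configProperties": {
      "System.Globalization.Invariant": true
    }
  }
}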

When I run it again I get another complaint, this time about libuv. Yet another shared library that isn't installed on this instance. I could go get it and put it in netcoredeps OR, since I'm using .NET Core 2.1, I could try something new. There were some improvements made in .NET Core 2.1 around sockets and HTTP performance. On the client side, the new managed libraries are written from the ground up in managed code using the new high-performance Span<T>, and on the server side I could use Kestrel's (Kestrel is the .NET Core web server) experimental UseSockets() as they are starting to move that over.

In other words, I can bypass libuv usage entirely by changing my Program.cs to use UseSockets() like this.

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseSockets()
        .UseStartup<Startup>();

Let's run it again. I'll add the ASPNETCORE_URLS environment variable and set it to a high port like 8080. Remember, I'm not admin so I can't use any port under 1024.

secretname@theirvmname [~/www]$ export ASPNETCORE_URLS="http://*:8080"
secretname@theirvmname [~/www]$ ./supercheapaspnetsite
Hosting environment: Production
Content root path: /home/secretname/public_html
Now listening on: http://0.0.0.0:8080
Application started. Press Ctrl+C to shut down.

Holy crap it actually started.

Ok, but I can't access it from supercheapaspnetsite.com:8080 because this is GoDaddy's locked down managed shared hosting. I can't just open a port or forward a port in their control panel.

But. They use Apache, and that has the .htaccess file!

Could I use mod_proxy and try this?

ProxyPassReverse / http://127.0.0.1:8080/

Looks like no, they haven't turned this on. Likely they don't want to proxy off to external domains, but it'd be nice if they allowed localhost. Bummer. So close.

Fine, I'll proxy the traffic myself. (Not perfect, but this is all a spike)

RewriteRule ^(.*)$  "show.php" [L]

Cool, now a cheesy proxy goes in show.php.

<?php
$site = 'http://127.0.0.1:8080';
$request = $_SERVER['REQUEST_URI'];

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $site . $request);
curl_setopt($ch, CURLOPT_HEADER, TRUE);

# Log curl's verbose output to a file for debugging.
$f = fopen("headers.txt", "a");
curl_setopt($ch, CURLOPT_VERBOSE, 0);
curl_setopt($ch, CURLOPT_STDERR, $f);

# Don't output the curl response directly; we need to strip the headers.
# Yes, CURLOPT_HEADER could be FALSE and all this goes away,
# but for testing we log the headers.
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

$hold = curl_exec($ch);

# Strip the headers off the front of the response.
$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
$headers = substr($hold, 0, $header_size);
$response = substr($hold, $header_size);
$headerArray = explode(PHP_EOL, $headers); # kept around for header debugging

curl_close($ch);
fclose($f);

echo $response; # Echo the body ourselves. Yes, curl can do this for us.
?>

Cheesy, yes. Works for GET? Also, yes. This really is Apache's job, not ours, but kudos to Tomas for this evil idea.

An ASP.NET Core app at a host that doesn't support it

Boom. How about another page at /about? Yes.

Another page with ASP.NET Core at a host that doesn't support it

Lovely. But I had to run the app myself. I have no supervisor or process manager (again, this is already handled by GoDaddy for PHP, but I'm in an unprivileged world). Shooshing in and running it is a bad idea and not sustainable. (Well, this whole thing is not sustainable, but still.)

We could copy "screen" over and start it up and detach with something like "screen ./supercheapaspnetsite", but again, if it crashes, no one will restart it. We do have crontab, so for now we'll launch the app on a schedule occasionally to do a health check and, if needed, keep it running (see the sketch after the listing below). I also added a few debugging tools in ~/bin:

secretname@theirvmname [~/bin]$ ll
total 304
drwxrwxr-x  2    4096 Feb 28 20:13 ./
drwx--x--x 20    4096 Mar  1 01:32 ../
-rwxr-xr-x  1  150776 Feb 28 20:10 lsof*
-rwxr-xr-x  1   21816 Feb 28 20:13 nc*
-rwxr-xr-x  1  123360 Feb 28 20:07 netstat*
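
As for that crontab keep-alive, here's the kind of entry I mean (a hypothetical sketch; it assumes pgrep is available, or copied into ~/bin like the tools above):

# Every 5 minutes, if the app isn't running, relaunch it in the background.
*/5 * * * * pgrep -f supercheapaspnetsite > /dev/null || (cd ~/www && ASPNETCORE_URLS="http://*:8080" nohup ./supercheapaspnetsite > /dev/null 2>&1 &)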

All in all, not that hard. ASP.NET Core and .NET Core underneath it can run pretty much anywhere, just like PHP, Python, whatever.

If you're a host and you want to talk to someone at Microsoft about setting up ASP.NET Core shared hosting, email Sourabh.Shirhatti@microsoft.com and talk to them! If you are GoDaddy, I apologize, and you should also email. ;)


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




A multi-player server-side GameBoy Emulator written in .NET Core and Angular


Server-side GameBoy

One of the great joys of sharing and discovering code online is when you stumble upon something so truly epic, so amazing, that you have to dig in. Head over to https://github.com/axle-h/Retro.Net and ask yourself why this GitHub project has only 20 stars?

Alex Haslehurst has created some retro hardware libraries in open source .NET Core with an Angular Front End!

Translation?

A multiplayer server-side Game Boy emulator. Epic.

You can run it in minutes with

docker run -p 2500:2500 alexhaslehurst/server-side-gameboy

Then just browse to http://localhost:2500 and play Tetris on the original GameBoy!

I love this for a number of reasons.

First, I love his perspective:

Please check out my GameBoy emulator written in .NET Core; Retro.Net. Yes, a GameBoy emulator written in .NET Core. Why? Why not. I plan to do a few write-ups about my experience with this project. Firstly: why it was a bad idea.

  1. Emulation on .NET
  2. Emulating the GameBoy CPU on .NET

The biggest issue one has trying to emulate a CPU with a platform like .NET is the lack of reliable high-precision timing. However, he manages a nice from-scratch emulation of the Z80 processor, modeling low level things like registers in very high level C#. I love that public class GameBoyFlagsRegister is a thing. ;) I did similar things when I ported a 15 year old "Tiny CPU" to .NET Core/C#.

Address space diagram from https://ax-h.com/software/development/emulation/2017/12/03/emulating-the-gameboy-cpu-on-dot-net.html

Be sure to check out Alex's extremely detailed explanation on how he modeled the Z80 microprocessor.

Luckily the GameBoy CPU, a Sharp LR35902, is derived from the popular and very well documented Zilog Z80 - a microprocessor that is unbelievably still in production today, over 40 years after its introduction.

The Z80 is an 8-bit microprocessor, meaning that each operation is natively performed on a single byte. The instruction set does have some 16-bit operations but these are just executed as multiple cycles of 8-bit logic. The Z80 has a 16-bit wide address bus, which logically represents a 64K memory map. Data is transferred to the CPU over an 8-bit wide data bus but this is irrelevant to simulating the system at state machine level. The Z80 and the Intel 8080 that it derives from have 256 I/O ports for accessing external peripherals but the GameBoy CPU has none - favouring memory mapped I/O instead.

He didn't just create an emulator - there's lots of those - but uniquely he runs it on the server-side while allowing shared controls in a browser. "In between each unique frame, all connected clients can vote on what the next control input should be. The server will choose the one with the most votes… most of the time." Massively multi-player online GameBoy! Then he streams out the next frame! "GPU rendering is completed on the server once per unique frame, compressed with LZ4 and streamed out to all connected clients over websockets."

This is a great learning repository because:

  • It has complex business logic on the server side, but the front end uses Angular, WebSockets, and open web technologies.
  • It's also nice that he has a complete multi-stage Dockerfile that is itself a great example of how to build both .NET Core and Angular apps in Docker.
  • Extensive Unit Tests (thousands of them) with the Shouldly Assertion Framework and Moq Mocking Framework.
  • Great example usages of Reactive Programming.
  • Unit Testing on both server AND client, using Karma Unit Testing for Angular.

Here's a few favorite elegant code snippets in this huge repository.

The Reactive Button Presses:

_joyPadSubscription = _joyPadSubject
    .Buffer(FrameLength)
    .Where(x => x.Any())
    .Subscribe(presses =>
                {
                    var (button, name) = presses
                        .Where(x => !string.IsNullOrEmpty(x.name))
                        .GroupBy(x => x.button)
                        .OrderByDescending(grp => grp.Count())
                        .Select(grp => (button: grp.Key, name: grp.Select(x => x.name).First()))
                        .FirstOrDefault();
                    joyPad.PressOne(button);
                    Publish(name, $"Pressed {button}");
                    Thread.Sleep(ButtonPressLength);
                    joyPad.ReleaseAll();
                });

The GPU Renderer:

private void Paint()
{
    var renderSettings = new RenderSettings(_gpuRegisters);
    var backgroundTileMap = _tileRam.ReadBytes(renderSettings.BackgroundTileMapAddress, 0x400);
    var tileSet = _tileRam.ReadBytes(renderSettings.TileSetAddress, 0x1000);
    var windowTileMap = renderSettings.WindowEnabled ? _tileRam.ReadBytes(renderSettings.WindowTileMapAddress, 0x400) : new byte[0];
    byte[] spriteOam, spriteTileSet;
    if (renderSettings.SpritesEnabled) {
        // If the background tiles are read from the sprite pattern table then we can reuse the bytes.
        spriteTileSet = renderSettings.SpriteAndBackgroundTileSetShared ? tileSet : _tileRam.ReadBytes(0x0, 0x1000);
        spriteOam = _spriteRam.ReadBytes(0x0, 0xa0);
    }
    else {
        spriteOam = spriteTileSet = new byte[0];
    }
    var renderState = new RenderState(renderSettings, tileSet, backgroundTileMap, windowTileMap, spriteOam, spriteTileSet);
    var renderStateChange = renderState.GetRenderStateChange(_lastRenderState);
    if (renderStateChange == RenderStateChange.None) {
        // No need to render the same frame twice.
        _frameSkip = 0;
        _framesRendered++;
        return;
    }
    _lastRenderState = renderState;
    _tileMapPointer = _tileMapPointer == null ? new TileMapPointer(renderState) : _tileMapPointer.Reset(renderState, renderStateChange);
    var bitmapPalette = _gpuRegisters.LcdMonochromePaletteRegister.Pallette;
    for (var y = 0; y < LcdHeight; y++) {
        for (var x = 0; x < LcdWidth; x++) {
            _lcdBuffer.SetPixel(x, y, (byte) bitmapPalette[_tileMapPointer.Pixel]);
            if (x + 1 < LcdWidth) {
                _tileMapPointer.NextColumn();
            }
        }
        if (y + 1 < LcdHeight){
            _tileMapPointer.NextRow();
        }
    }
    
    _renderer.Paint(_lcdBuffer);
    _frameSkip = 0;
    _framesRendered++;
}

The GameBoy Frames are composed on the server side then compressed and sent to the client over WebSockets. He's got backgrounds and sprites working, and there's still work to be done.

The Raw LCD is an HTML5 canvas:

<canvas #rawLcd [width]="lcdWidth" [height]="lcdHeight" class="d-none"></canvas>
<canvas #lcd
        [style.max-width]="maxWidth + 'px'"
        [style.max-height]="maxHeight + 'px'"
        [style.min-width]="minWidth + 'px'"
        [style.min-height]="minHeight + 'px'"
        class="lcd"></canvas>

I love this whole project because it has everything. TypeScript, 2D JavaScript Canvas, retro-gaming, and so much more!

const raw: HTMLCanvasElement = this.rawLcdCanvas.nativeElement;
const rawContext: CanvasRenderingContext2D = raw.getContext("2d");
const img = rawContext.createImageData(this.lcdWidth, this.lcdHeight);
for (let y = 0; y < this.lcdHeight; y++) {
  for (let x = 0; x < this.lcdWidth; x++) {
    const index = y * this.lcdWidth + x;
    const imgIndex = index * 4;
    const colourIndex = this.service.frame[index];
    if (colourIndex < 0 || colourIndex >= colours.length) {
      throw new Error("Unknown colour: " + colourIndex);
    }
    const colour = colours[colourIndex];
    img.data[imgIndex] = colour.red;
    img.data[imgIndex + 1] = colour.green;
    img.data[imgIndex + 2] = colour.blue;
    img.data[imgIndex + 3] = 255;
  }
}
rawContext.putImageData(img, 0, 0);
context.drawImage(raw, lcdX, lcdY, lcdW, lcdH);

I would encourage you to go STAR and CLONE https://github.com/axle-h/Retro.Net and give it a run with Docker! You can then use Visual Studio Code and .NET Core to compile and run it locally. He's looking for help with GameBoy sound and a Debugger.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




Major build speed improvements - Try .NET Core 2.1 Preview 1 today


Head over to the main .NET Core download page and pick up .NET Core 2.1 - Preview 1.

The SDK contains the tools you need to build and run apps with .NET Core and supports Mac, Windows, Ubuntu, Debian, Fedora, CentOS/Oracle, openSUSE, and we even have Docker images for Stretch, Alpine, and more. It's not your grandmother's Microsoft. ;)

Once you've installed it, from a prompt type "dotnet" and note a few new built-in switches:

C:\Users\scott> dotnet
Usage: dotnet [options]
Usage: dotnet [path-to-application]
Options:
  -h|--help         Display help.
  --version         Display the current SDK version.
  --list-sdks       Display the installed SDKs.
  --list-runtimes   Display the installed runtimes.
path-to-application:
  The path to an application .dll file to execute.

I'll run it again twice with --list-sdks and --list-runtimes:

C:\Users\scott> dotnet --list-sdks
2.1.300-preview1-008174 [C:\Program Files\dotnet\sdk]
2.1.4 [C:\Program Files\dotnet\sdk]

C:\Users\scott> dotnet --list-runtimes
Microsoft.AspNetCore.All 2.1.0-preview1-final [C:\Program Files\dotnet\shared]
Microsoft.AspNetCore.App 2.1.0-preview1-final [C:\Program Files\dotnet\shared]
Microsoft.NETCore.App 2.0.5 [C:\Program Files\dotnet\shared]
Microsoft.NETCore.App 2.1.0-preview1-26216-03 [C:\Program Files\dotnet\shared]

There's a few interesting things happening here. You can see that before I had the runtime for .NET Core 2.0.5, and now I also have the 2.1.0 Preview.

It can also be a little confusing that the SDK and Runtime sometimes have different versions, similar to JREs and JDKs. Simply stated - the thing that builds sometimes updates while the thing that runs doesn't. So the .NET Core SDK and compilers might get fixes but the runtime doesn't. I'm told they're going to line things up better. You can read deeply on versioning if you like.

You'll also notice AspNetCore.App, which is a metapackage (a package of packages) that includes less than All and helps you make smaller apps.
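
For illustration, referencing the metapackage is just a PackageReference in the csproj. A sketch (the version string is an assumption, taken from the preview1 runtime listing above):

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0-preview1-final" />
</ItemGroup>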

If you install a beta or preview you might be worried it'll mess stuff up. It won't.

You can type "dotnet new globaljson" and make a file that looks like this! Then "pin" the SDK version you want to use:

{
  "sdk": {
    "version": "2.1.300-preview1-008174"
  }
}

I'll change this to .NET Core's older SDK and try building the .NET Core-based GameBoy Emulator from my last post. It's amazing.
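
Pinning back is just editing that version, here to the 2.1.4 SDK from the --list-sdks output above:

{
  "sdk": {
    "version": "2.1.4"
  }
}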

Let's see how fast it builds today on .NET 2.0:

C:\github\Retro.Net> Measure-Command { dotnet build }
Milliseconds      : 586
Ticks             : 65864065
TotalSeconds      : 6.5864065
TotalMilliseconds : 6586.4065

Ok, about 6.5 seconds on my Surface.

Let's make the SDK version the new .NET Core 2.1 Preview 1 - it has a bunch of build speed improvements.

All I have to do is change the global.json file. Update the sdk version in the global.json and type "dotnet --version" to see that it took.

I can have as many .NET Core SDKs as I like on my machine and I can control what SDK versions are being used on a tree by tree basis. That means you CAN download .NET Core 2.1 and not mess things up if you're paying attention.

C:\github\Retro.Net> Measure-Command { dotnet build }
Milliseconds      : 727
Ticks             : 27270864
TotalSeconds      : 2.7270864
TotalMilliseconds : 2727.0864

Hey it's less than 3 seconds. 2.7 in fact! More than twice as fast.

Build times as much as 10x faster

The bigger the app, the faster incremental builds should be. In some cases we will see (by release) 10x improvements.

It's quick to install (and quick to uninstall) and you can control the SDK version (list them with "dotnet --list-sdks") with the global.json.

Please go download the preview and let me know either on Twitter or in the comments what your before and after build times are!


Sponsor: Unleash a faster Python! Supercharge your applications performance on future forward Intel platforms with The Intel Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python Now!




Upgrading my podcast site to ASP.NET Core 2.1 in Azure plus some Best Practices


I am continuing to upgrade my podcast's site. Today I upgraded it to .NET Core 2.1, continuing the work from last week's upgrade from "WebMatrix Web Pages." I upgraded to actually running ASP.NET Core 2.1's preview in Azure by following this blog post.

Pro Tip: Be aware, you can still get up to 10x faster local builds but still keep your site's runtime as 2.0 to lower risk. So there's little reason to not download the .NET Core 2.1 Preview and test your build speeds.

At this point the podcast site is live in Azure at https://hanselminutes.com. Now that I've moved off of the (very old) site I've quickly set up some best practices in just a few hours. I should have taken the time to upgrade this site - and its "devops" a long time ago.

Here's a few things I was able to get done just this evening while the boys did homework. Each of these tasks took between 5 and 15 minutes. So not a big investment, but they represented real value I'd been wanting to add to the site.

Git Deploy for Production

The podcast site's code now lives in GitHub and deployment to production is a git push to master.

Deploying from GitHub

A "deployment slot" for staging

Some people like to have the master branch be Production, then they make a branch called Staging for a secondary staging site. Since Azure App Service (WebSites) has "deployment slots," I chose to do it differently. I deploy to Production from GitHub, sure, but I prefer to push manually to staging rather than litter my commits with little stuff (and then have to clean them up or squash them later - it's just my preference).

I hooked up Git Deployment, but the git repo is in Azure and just for deploy. Then "git remote add azure ..." so when I want to deploy to staging it's:

git push staging

I use it for testing, so ya, it could have been test/dev, etc, but you get the idea. Plus the Deployment Slot/Staging Site is free as it's on the same Azure App Service Plan.

A more sophisticated - but just as easy - plan would be to push to staging, get it perfect then do a "hot swap" with a single button click.

Deployment Slots can have their own independent settings if you click "Slot Setting." Here I've set this slot's ASPNETCORE_ENVIRONMENT to "Staging" while the main one is "Production."

Staging Slots in Azure

The ASP.NET Core runtime picks up that environment variable and I can conditionally run code based on Environment. I run as "Development" on my local machine. For example:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Error");
}

Don't let Google Index the Staging Site - No Robots

You should be careful to not let Google/Bing/DuckDuckGo index your staging site if it's public. Since I have an environment set on my slot, I can just add this Meta Robots element to the site's main layout. Note also that I use minified CSS when I'm not in Development.

<environment include="Development">
    <link rel="stylesheet" href="~/css/site.css" />
</environment>
<environment exclude="Development">
    <link rel="stylesheet" href="~/css/site.min.css" />
</environment>
<environment include="Staging">
    <meta name="robots" content="noindex, follow">
</environment>

Require SSL

Making the whole ASP.NET Core site use SSL has been on my list as well. I added my SSL certs in the Azure Portal, then adding RequireHttps in my Startup.cs was pretty easy.

I could have also added it to the existing IISRewriteUrls.xml legacy file, but this was easier and faster.

var options = new RewriteOptions().AddRedirectToHttps();
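
For context, here's roughly how that one-liner plugs into the pipeline (a minimal sketch of a Configure method; the middleware after the rewriter is illustrative):

public void Configure(IApplicationBuilder app)
{
    // Redirect any HTTP request to HTTPS before the rest of the pipeline runs.
    var options = new RewriteOptions().AddRedirectToHttps();
    app.UseRewriter(options);

    app.UseStaticFiles();
    app.UseMvc();
}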

Here's how I'd do it via the IIS Rewrite Middleware, FYI:

<rule name="HTTP to HTTPS redirect" stopProcessing="true">
    <match url="(.*)" />
    <conditions>
       <add input="{HTTPS}" pattern="off" ignoreCase="true" />
    </conditions>
    <action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
redirectType="Permanent" />
</rule>

Application Insights for ASP.NET Core

Next post I'll talk about Application Insights. I was able to set it up both client- and server-side and get a TON of info in about 15 minutes.

Application Insights

How are you?


Sponsor: Unleash a faster Python! Supercharge your applications performance on future forward Intel platforms with The Intel Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python Now!



Setting up Application Insights took 10 minutes. It created two days of work for me.


I've been upgrading my podcast site from a 10 year old WebMatrix site to modern, open source ASP.NET Core with Razor Pages. It's off IIS and now running cross-platform in Azure.

I added Application Insights to the site in about 10 minutes just a few days ago. It was super easy to set up and basically automatic in Visual Studio 2017 Community. I left the defaults, installed a bit of script on the client, enabled the server side, and AppInsights already found a few interesting things.
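
For reference, the server-side enablement boils down to one call on the host builder. A minimal sketch, assuming the Microsoft.ApplicationInsights.AspNetCore package that Visual Studio wires up:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseApplicationInsights() // enables server-side telemetry collection
        .UseStartup<Startup>()
        .Build();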

It took 10 minutes to set up App Insights. It took two days (and work continues) to fix what it found. I love it. This tool has already given me deeper insight into how my code runs and how it's behaving - and I'm just scratching the surface. I'll need to do some videos and/or more blog posts to dig deeper. Truly, you need to try it.

Slow performance in other countries

I could fill this blog post with dozens of awesome screenshots of the useful charts, graphs, and filters that I got by just turning on AppInsights. But the most interesting part is that I turned it on really expecting nothing. I figured I'd get some "Google Analytics"-type behavior.

Then I got this email:

Browser Time is slow in Bangladesh

Huh. I had set up the Azure CDN at images.hanselminutes.com to handle all the faces for each episode. I then added lazy loading so that the website only loads the images that enter the browser's viewport. I figured I was pretty much done.

However I didn't really think about the page itself as it loads for folks from around the world - given that it's hosted on Azure in the West US.

18.4 secs to load the page in Bangladesh

Ideally I'd want the site to load in less than a second, but this is my archives page with 600 shows so it's pretty heavy.

That's some long load times

Yuck. I have a few options. I could pay to load up another copy of the site in South Asia and then do some global load balancing. However, I'm hosting this on a single small App Service Plan (along with a dozen other sites) so I don't really want to pay much to fix this.

I ended up signing up for a free account at CloudFlare and set up caching for my HTML. The images stay the same, served by the Azure CDN.

Lots of requests from Cloudflare

Fixing Random and regular Server 500 errors

I left the site up for a while and came back later to a warning. You can see my site availability is just 93%. Note that there's "2 Servers?" That's because one is my local machine! Very cool that AppInsights also (optionally) tracks your local development server as well.

1 Alert!

When I dig in I see a VERY interesting sawtooth pattern.

Pro Tip - Recognizing that a Sawtooth Pattern is a Bad Thing (tm) is an important DevOps skill. Why is this happening regularly? Is it exactly regular (like every 4 hours, on a schedule?) or only somewhat regular (like a garbage collection issue)?

What do these operations have in common? Look closely.


It's not a GET, it's a HEAD. Remember that HTTP verbs are more than GET, POST, PUT, DELETE. There's also HEAD. It literally is a HEADer call. Like a GET, but no body.

HTTP HEAD - The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.

I installed HTTPie - which is like curl or wget for humans - and issued a HEAD command from my local machine while under the debugger.

C:\>http --verify=no HEAD https://localhost:5001

HTTP/1.1 500 Internal Server Error
Content-Type: text/html; charset=utf-8
Date: Tue, 13 Mar 2018 03:41:51 GMT
Server: Kestrel

Ok that is bad. See the 500? I check out AppInsights and see it has the full call stack. See it's getting a NullReferenceException as it tries to Render() the Razor page?

Null Reference Exception

It turns out, since I'm using Razor Pages, I have implemented "OnGet" where I do my database work, then pass a model to the page to generate HTML. However, if someone issues a HEAD, the page still runs but the local data work never happens (I have no OnHead() call). I have a few options here. I could handle HEAD myself. I could no-op it, but that'd be a lie.

THOUGHT: I think this behavior is sub-optimal. While GET and POST are distinct and it makes sense to require an OnGet() and OnPost(), I think that HEAD is special. It's basically a GET with a "don't return the body" flag set. So why not have Razor Pages automatically delegate OnHead to OnGet, unless there's an explicit OnHead() declared? I'll file an issue on GitHub because I don't like this behavior and I find it counter-intuitive. I could also register a global IPageFilter to make this work for all my site's pages.
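
If I did go the global IPageFilter route, a sketch might look something like this (hypothetical and untested; it assumes PageHandlerSelectedContext lets a filter swap the selected handler, and the class name is made up):

using System;
using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc.Filters;

// Hypothetical global page filter: when a HEAD request matches no OnHead
// handler, fall back to the page's OnGet handler instead of running with none.
public class HeadDelegatesToGetPageFilter : IPageFilter
{
    public void OnPageHandlerSelected(PageHandlerSelectedContext context)
    {
        if (context.HandlerMethod == null &&
            HttpMethods.IsHead(context.HttpContext.Request.Method))
        {
            context.HandlerMethod = context.ActionDescriptor.HandlerMethods
                .FirstOrDefault(h => string.Equals(h.HttpMethod, "Get",
                    StringComparison.OrdinalIgnoreCase));
        }
    }

    public void OnPageHandlerExecuting(PageHandlerExecutingContext context) { }
    public void OnPageHandlerExecuted(PageHandlerExecutedContext context) { }
}

You'd register it for the whole site with something like services.AddMvc(o => o.Filters.Add(new HeadDelegatesToGetPageFilter())) in ConfigureServices.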

The simplest thing to do is just to delegate the OnHead to the OnGet handler.

public Task OnHeadAsync(int? id, string path) => OnGetAsync(id, path);

Then double check and test it with HTTPie:

C:\>http --verify=no HEAD https://localhost:5001

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Date: Tue, 13 Mar 2018 03:53:55 GMT
Request-Context: appId=cid-v1:e310025f-88e9-4133-bc15-e775513c67ac
Server: Kestrel

Bonus - Application Map

Since I have AppInsights enabled on both the client and the server, I can see this cool live Application Map. I'll check again in a few days to see if I have fewer errors. You can see where my Podcast Site calls into the backend data service at Simplecast.

An application map that shows all the components, both client and server

I saw a few failures in my call to SimpleCast's API as I was failing to consistently set my API key. Everything in this map can be drilled down into.

Bonus - Web Performance Testing

I figured while I was in the Azure Portal I would also take advantage of the free performance testing. I did a simulated aggressive 250 users beating on the site. Average response time is 1.22 seconds and I was doing over 600 req/second.

38097 successful calls

I am learning a ton of stuff. I have more things to fix, more improvements to make, and more insights to dig into. I LOVE that it's creating all this work for me because it's giving me a better application/website!

You can get a free Azure account at http://azure.com/free or check out Azure for Startups https://azure.microsoft.com/overview/startups/ and get a bunch of free Azure time. AppInsights works with Node, Docker, Java, ASP.NET, ASP.NET Core, and other platforms. It even supports telemetry in Electron or Windows Apps.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.




Cross-platform GUIs with open source .NET using Eto.Forms


Amazing Cross Platform ANSI art editor

This is one of those "Did you know you could do THAT?" posts. Many folks have figured out that C#/F#/.NET is cross-platform and open source and runs on basically any operating system. People are using it to create microservices, web sites, and web APIs all over. Not to mention iPhone/Android apps with Xamarin and video games with Unity and MonoGame.

But what about cross platform UIs?

While not officially supported by Microsoft, you can do some awesome stuff...which is how Open Source is supposed to work! Remember that there's a family of .NET runtimes now: there's the .NET Framework on Windows, there's xplat .NET Core, and there's xplat Mono.

Eto.Forms has been in development since 2012 and is a cross-platform framework for creating GUI (Graphical User Interface, natch) applications with .NET that run across multiple platforms using their native toolkit. Not like Java in the 90s with custom painted buttons on canvas.

It's being used for real stuff! In fact, PabloDraw is an ANSI/ASCII text editor that you didn't know you needed in your life. But you do. It runs on Windows, Mac, and Linux and was written using Eto.Forms but has a native UI on each platform. Be sure to check out Curtis Wensley's Twitter account for some cool examples of what PabloDraw and Eto.Forms can do! Here's what Eto.Forms uses for its native UI on each platform:

  • OS X: MonoMac or Xamarin.Mac (and also iOS via Xamarin)
  • Linux: GTK# 2 or 3
  • Windows: Windows Forms (using GDI or Direct2D) or WPF

Here's an example Hello World. Note that it's not just Code First; you can also use Xaml, or even Json (.jeto), to lay out your forms!

using System;
using Eto.Forms;
using Eto.Drawing;

public class MyForm : Form
{
    public MyForm()
    {
        Title = "My Cross-Platform App";
        ClientSize = new Size(200, 200);
        Content = new Label { Text = "Hello World!" };
    }

    [STAThread]
    static void Main()
    {
        new Application().Run(new MyForm());
    }
}

Or I can just File | New Project with their Visual Studio Extension. You should definitely give it a try.


Even on the same platform (Windows in the below example) amazingly Eto.Forms can use whatever Native Controls you prefer. Here's a great example zip that has precompiled test apps.

WinForms, WPF, and Direct2D apps

Once you've installed a new version of Mono on Ubuntu, you can run the same sample as Gtk3, as I'm doing here in a VM. AMAZING.


There are also some example applications out in the wild that use Eto.Forms.

There's so much cool stuff happening in open source .NET right now, and Eto.Forms is actively looking for help. Go check out their excellent Wiki, read the Tutorials, and maybe get involved!


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.


