Channel: Scott Hanselman's Blog

BUILD 2017 Conference Rollup for .NET Developers


The BUILD Conference was lovely this last week, as was OSCON. I was fortunate to be at both. You can watch all the interviews and training sessions from BUILD 2017 on Channel 9.

Here are a few sessions that you might be interested in.

Scott Hunter, Kasey Uhlenhuth, and I had a session on .NET Standard 2.0 and how it fits into a world of .NET Core, .NET (Full) Framework, and Mono/Xamarin.

One of the best demos in this talk, IMHO, was taking an older .NET 4.x WinForms app, updating it to .NET 4.7, and automatically getting HiDPI support. Then we moved its DataSet-driven XML database layer into a shared class library that targeted .NET Standard. Then we made a new ASP.NET Core 2.0 application that shared that new .NET Standard 2.0 library with the existing WinForms app. It's a very clear example of the goal of .NET Standard.
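If you want to try the sharing part of that demo from the command line, here's a rough sketch (these project names are hypothetical, not the ones from the talk):

dotnet new classlib -o Shared.Data        # a class library targeting .NET Standard
dotnet new mvc -o WebApp                  # a new ASP.NET Core app
dotnet add WebApp/WebApp.csproj reference Shared.Data/Shared.Data.csproj

The existing WinForms project then adds an ordinary project reference to Shared.Data.csproj, and both apps share the one library.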

.NET Core 2.0 Video

Then, Daniel Roth and I talked about ASP.NET Core 2.0

ASP.NET Core 2.0 Video

Maria Naggaga talked about Support for ASP.NET Core. What's "LTS?" How do you balance purchased software that's supported and open source software that's supported?

Support for ASP.NET and .NET - What's an LTS?

Mads Torgersen and Dustin Campbell teamed up to talk about the Future of C#!

The Future of C#

David Fowler and Damian Edwards introduced ASP.NET Core SignalR!

SignalR for .NET Core

There's also a TON of great 10-15 min short BUILD videos like:

As for announcements, check these out:

And best of all...All .NET Core 2.0 and .NET Standard 2.0 APIs are now on http://docs.microsoft.com at https://docs.microsoft.com/en-us/dotnet

Enjoy!


Sponsor: Test your application against full-sized database copies. SQL Clone allows you to create database copies in seconds, using only megabytes of storage. Create clones instantly and test your application as you develop.


© 2017 Scott Hanselman. All rights reserved.
     

Suggestions and Tips for attending your first tech conference

WoC Tech Chat used under CC

This last week Joseph Phillips tweeted that he was going to his first big tech conference and wanted some tips and suggestions. I have a TON of tips, but I know YOU have more, so I retweeted his request and prompted folks to reply. This was well timed as I had just gotten back from OSCON and BUILD, two great conferences.

The resulting thread was fantastic, so I've pulled some of the best recommendations out. As per usual, the Community has some great ideas and you should check them out!

  • @saraford - Whenever you get a biz card write down why you met them or what convo was about. It might seem obvious at time but you wont remember at home
  • @arcdigg - Meet people and speakers. Tech is part of your success, but growing your network matters too. Conf can give you both or not. Up to you!
  • @marypcbuk - if approaching people is hard for you, just ask 'what do you work on?'
  • @ohhoe - don't be afraid to introduce yrself to people! let them know its yr first conference, often people will introduce you to other people too :)
  • @IrishSQL - connect with a few attendees/speakers online prior to event, and bring plenty of business cards. When u get one, write details on back
  • @arcdigg - Backpack and sneakers beat cute laptop bag and heels (ed: dress comfortably)
  • @scribblingon - You might feel left out & think everyone knows everyone else. Don't be afraid to approach people & talk even if seems random sometimes :) If you liked someone's talk, strike a convo & tell them that!!
  • @arcdigg - Plan session attendance in advance, have a backup in case the session is full.
  • @jesslynnrose - Reach out to some other folks who are using the hashtag before you get there, events can be cliquey, say hi and make friends before you go!
  • @thelarkinn - Never feel afraid to say hi to maintainers, and speakers!!!! Especially if you want to help!
  • @everettharper - Pick 3 ppl you want to meet. Prep 1 Q for each. Go early, find person #1 in the 1st hr before crowds. 1/3 done = momentum for rest of day!
  • @jorriss - Meet people. Skip sessions. You'll get more from meeting and talking with people then sitting in the sessions. #hallwaytrack
  • @stabbycutyou - Leave room in your schedule, Meet people, Eavesdrop on hallway convos, Take notes, Present on them at your job
  • @patrickfoley - Don't forget to sleep. Evidence that long-term memories get "written" then
  • @david_t_macknet - Drinking will not help you remember it better or have a better time mingling. Most of us are just as introverted & the awkwardness fades.
  • @carlowahlstedt - Don't feel like you have to go to EVERY session.
  • @davidpine7 - Try your best to NOT be an introvert -- in our industry that can be challenging, but if you put yourself out there...you will not regret it!
  • @frontvu - Don't rely on the conference wifi
  • @shepherddad - Put snacks in your bag or pocket.
  • @sod1102 - Find out if there will be slides (and even better!) video available post conference, then don't worry about missing stuff and relax & enjoy
  • @rnelson0 - Take notes. Live tweet, carry a notebook, jot it all down at 1am before sleeping, whatever method helps you remember what you did.
  • @hoyto - Sit [at] meal tables with random people and introduce yourself.
  • @_s_hari - Ask speaker when *not* to use product/methodology that they're speaking on. If they cannot explain that, then it's just a marketing session
  • @EricFishor - Don't be afraid to discreetly leave or enter an on going session. It's up to you to seek out sessions that interest you.
  • @texmandie - If you get to meet and talk to your heroes, don't freak out - they're normal people who happen to do cool stuff
  • @wilbers_ke - Greatest connections happen in the hallways, coffee queue and places with animated humans. Minimize seated conference halls
  • @CJohnsonO365 - CLEAR YOUR SCHEDULE. Don’t try to get “regular” work done during the conference— you’ll end up missing something important!
  • @g33konaut - Tweet with the conf hashtag to ask if people wanna meet and talk or hangout after the conference, also follow the hashtag tweets to find ppl. Don't sweat missing a talk, meeting people and talking to them is always better than than seeing a talk. Also the talks are often recorded
  • @foxdeploy - Who cares about swag, it's all about connections. Meet the people who've helped you over the years and say thanks.
  • @jfletch - Ask people which after parties they are attending. Great way to find out about smaller/more interesting events and get yourself invited!
  • @marxculture - The Law of Two Feet - if you aren't enjoying a session then leave. Go to at least one thing outside your normal sphere.
  • @joshkodroff - Bring work business cards if you're not looking for a job, personal business cards if you are.
  • @benjimawoo - Go to sessions that cover tehnologies you wouldn't otherwise encounter day to day. Techs you don't use in your day job.

Fantastic stuff. You'll get more out of a conference if you say hello, include the "hallway track" in your planning, stay off your phone and laptop, and check out sessions and tech you don't usually work on.

What are YOUR suggestions? Sound off in the comments.


Sponsor: Did you know VSTS can integrate closely with Octopus Deploy? Watch Damian Brady and Brian A. Randell as they show you how to automate deployments from VSTS to Octopus Deploy, and demo the new VSTS Octopus Deploy dashboard widget. Watch now!



© 2017 Scott Hanselman. All rights reserved.
     

Exploring the preconfigured browser-based Linux Cloud Shell built into the Azure Portal


At BUILD a few weeks ago I did a demo of the Azure Cloud Shell, now in preview. It's pretty fab, and it's built into the Azure Portal and lives in your browser. You don't have to do anything; it's just there whenever you need it. I'm trying to convince them to enable "Quake Mode" so it would pop up when you press ~, but they never listen to me. ;)

Animated Gif of the Azure Cloud Shell

Click the >_ shell icon in the top toolbar at http://portal.azure.com. The very first time you launch the Azure Cloud Shell, it will ask you where you want your $home directory files to be persisted. They will live in your own Storage Account. Don't worry about cost; remember that Azure Storage is like pennies a gig, so assuming you're storing script files, figure it's thousandths of pennies - a non-issue.

Where do you want your account files persisted to?

It's pretty genius how it works, actually. Since you can set up an Azure Storage Account as a regular file share (shared to Mac, Linux, or Windows), it just makes a file share and mounts it. The data you save in ~/clouddrive is persistent between sessions; the sessions themselves disappear if you don't use them.
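A quick way to convince yourself of that persistence (a sketch; output omitted):

scott@Azure:~$ df -h ~/clouddrive                         # shows the mounted Azure Files share
scott@Azure:~$ echo "persist me" > ~/clouddrive/note.txt  # still there next session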

Now my Azure Cloud Shell Files are available anywhere

Today it's got bash inside a real container. Here's what lsb_release -a says:

scott@Azure:~/clouddrive$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.1 LTS
Release:        16.04
Codename:       xenial

Looks like Ubuntu xenial inside a container, all managed by an orchestrator within Azure Container Services. The shell is using xterm.js to make it all possible inside the browser. That means you can run vim, top, whatever makes you happy. Cloud shells include vim, emacs, npm, make, maven, pip, as well as docker, kubectl, sqlcmd, postgres, mysql, iPython, and even .NET Core's command line SDK.

NOTE: Ctrl-V and Ctrl-C do not function as copy/paste on Windows machines [in the Portal using xterm.js]; please use Ctrl-Insert and Shift-Insert to copy/paste. Right-click copy/paste options are also available, though they are subject to browser-specific clipboard access.

When you're in there, of course the best part is that you can ssh into your Linux VMs. They say PowerShell is coming soon to the Cloud Shell, so I assume you'll be able to remote PowerShell into Windows boxes as well.

The Cloud Shell has the Azure CLI (command line interface) built in and pre-configured and logged in. So I can hit the shell then (for example) get a list of my web apps, and restart one. Here I'm getting the names of my sites and their resource groups, then restarting my son's hamster blog.

scott@Azure:~/clouddrive$ az webapp list -o table
ResourceGroup               Location          State    DefaultHostName                             AppServicePlan     Name
--------------------------  ----------------  -------  ------------------------------------------  -----------------  ------------------------
Default-Web-WestUS          West US           Running  thisdeveloperslife.azurewebsites.net        DefaultServerFarm  thisdeveloperslife
Default-Web-WestUS          West US           Running  hanselmanlyncrelay.azurewebsites.net        DefaultServerFarm  hanselmanlyncrelay
Default-Web-WestUS          West US           Running  myhamsterblog.azurewebsites.net             DefaultServerFarm  myhamsterblog


scott@Azure:~/clouddrive$ az webapp restart -n myhamsterblog -g "Default-Web-WestUS"

Pretty cool. I'm going to keep exploring. I like the direction the Azure Portal is going from a GUI and DevOps dashboard perspective, and it's also nice to have a CLI preconfigured whenever I need it.


Sponsor: Did you know VSTS can integrate closely with Octopus Deploy? Watch Damian Brady and Brian A. Randell as they show you how to automate deployments from VSTS to Octopus Deploy, and demo the new VSTS Octopus Deploy dashboard widget. Watch now!


© 2017 Scott Hanselman. All rights reserved.
     

Choice amongst cross-platform .NET IDEs - VS Code, Visual Studio for Mac, JetBrains Rider


A few years back, .NET development on a Mac was relegated to Mono and whatever text editor you knew how to exit successfully. Xamarin Studio came out in 2013 as a standalone IDE for mobile app development, but it wasn't a generalized or web development IDE. Later the OmniSharp OSS project came along and added IntelliSense to a half-dozen editors with its smart out-of-process IntelliSense server, but these were code editors with .NET-specific features, not strictly IDEs.

Side Note: I've been writing this blog post on and off for a while. Coincidentally JetBrains Rider is sponsoring my blog this week. It's a coincidence, but I want to be transparent about it as I don't do sponsored/directed blog posts - rather, folks sponsor a calendar week.

Fast forward a bit and we've got some choices amongst cross-platform .NET development on non-Windows platforms.

Visual Studio Code

First, there's Visual Studio Code (more of a code editor, but with a TON of plugins and extensions) that is a very competent editor for .NET on Mac or Linux. It's also one of the best node.js editors/debuggers anywhere - nice if you're working on multi-language projects.

Visual Studio Code

If you look in the lower-right corner there in Visual Studio Code you can see the OmniSharp flame logo in the corner, helping power the C# Extension for Visual Studio Code. For ASP.NET Core web developers, VS Code is pretty good, although its lack of support for Razor Views/Pages remains a hole. You don't get intellisense for your C# when you open a code block like @{ } in a Razor View. That said, there are a bunch of extensions that add snippets for dozens (hundreds?) of languages, syntax highlighting for basically everything, and it's all built on an open source base of TypeScript. VS Code supports git natively as well.

JetBrains Rider

Currently in "EAP," that's  Early Access Program/Preview, or beta for the rest of us, JetBrains Rider runs on Windows, Mac, and Linux and lets you manage and build .NET Framework, Mono, and .NET Core solutions. Rider supports C#, VB.NET, ASP.NET syntax, XAML, XML, JavaScript, TypeScript, JSON, HTML, CSS, and SQL within its text editor.

Rider has the smart editor and the 50+ refactorings that fans of ReSharper will appreciate, with lots of choice in key bindings: you can tell Rider whether you prefer ReSharper, VS, Eclipse, or NetBeans bindings. It does a ton of custom code analysis and can refactor and analyze your code while you type. It's also got a built-in decompiler for exploring libraries you don't have the source for.

Rider also supports Git, Subversion, Mercurial, Perforce and TFS out of the box and can add more source systems via plugins.

JetBrains Rider

Visual Studio for Mac

VS for Mac is new, and while it started as Xamarin Studio, there have been a ton of additions to it according to Miguel de Icaza. In the future, VS for Mac will share the exact same core editor code that Visual Studio for Windows uses for its text editors like HTML, Razor, CSS, and more. One of the things I like the most about Visual Studio for Mac is that it looks like Visual Studio...FOR MAC. By that I mean it doesn't look like Visual Studio on Windows copy-pasted onto the Mac. It has a Mac UI, Mac icons, a Mac look and feel. Much like Office for Mac, it's a native app that smells native because it is.

Visual Studio for Mac

The release of VS for Mac includes support for ASP.NET Core and .NET Core. Like all these IDEs and editors, it shares csproj and sln files cleanly with Visual Studio for Windows. That means that you can easily share projects and code with some folks on Mac and some on Windows.

Visual Studio for Mac is best when used for these scenarios:

  • Mobile development with Xamarin
  • Cloud development with .NET Core and ASP.NET Core, and publishing to Azure
  • Web development with ASP.NET Core and web editor tooling

For example, when you make a new Mobile app in C#, you can get an ASP.NET Core backend along with it. Then you can easily publish the backend to Azure at the same time you push your app onto Android or iPhone.

Finally, one of the coolest features for mobile developers on Visual Studio for Mac is the "Xamarin Live Player." This allows you to pair your instance of Visual Studio with your development phone and do continuous development and testing. As you make changes in Visual Studio, the changes are immediately visible in the Live Player - no need to redeploy. That feature is in preview as of the time of this writing.

If you're developing but you're not on Windows, there's never been a better time to develop cross-platform with .NET Core. Check out each of these:

Have you tried these out? What have you found?


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!



© 2017 Scott Hanselman. All rights reserved.
     

Visual Studio and IIS Error: Specified argument was out of the range of valid values. Parameter name: site


I got a very obscure and obtuse error running an ASP.NET application under Visual Studio and IIS Express recently. I'm running a Windows 10 Insiders (Fast Ring) build, so it's likely an issue with that, but since I was able to resolve the issue simply, I figured I'd blog it for Google posterity.

I would run the ASP.NET app from within Visual Studio and get this totally useless error. It was happening VERY early in the bootstrapping process and NOT in my application. It's pretty clearly happening somewhere in the depths of IIS Express, perhaps in a configurator in HttpRuntime.

Specified argument was out of the range of valid values.
Parameter name: site

I fixed it by going to Windows Features and installing "IIS Hostable Web Core," part of Internet Information Services. I did this in an attempt to "fix whatever's wrong with IIS Express."
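If you prefer the command line to the Windows Features dialog, I believe the same feature can be enabled with DISM from an admin prompt (the feature name below is how it appears on my machine; yours may vary by Windows version):

C:\> dism /online /enable-feature /featurename:IIS-HostableWebCore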

Turn Windows Features on or off

That seems to "repair" IIS Express. I'll update this post if I learn more, but hopefully if you got to this post, this fixed it for you also.


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2017 Scott Hanselman. All rights reserved.
     

.NET and Docker


“Container Ship” by NOAA's National Ocean Service is licensed under CC BY 2.0

.NET and .NET Core (and Windows!) have been getting better and better with Docker. I run Docker for Windows, as it supports both Linux Containers and Windows Containers. There are both Stable and Edge channels. The Edge (Beta) channel is regularly updated and, as a rule, has gotten better and better over the year I've been running it.

As a slightly unrelated side note, I'm also running Docker on my Synology NAS with a number of containers, as well as .NET Core (my NAS has an Intel chip), a Minecraft server, a Plex server, and CrashPlan.

NOTE: Docker for Windows requires 64bit Windows 10 Pro and Microsoft Hyper-V. Please see What to know before you install for a full list of prerequisites.

The .NET Team at Microsoft has been getting their Dockerfiles in order and organized. It can initially seem the opposite, with lots of cryptic tags and names, but there's a clear method that you can read about here.

They publish their Docker images in a few different repositories on Docker Hub. It’s important to segment images so that they are easier to find, both on the Docker Hub website as well as with the docker search command.
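For example, here's a hedged sketch of pulling from those repositories; microsoft/dotnet is the main repository, but treat the specific tags as illustrative since they change over time:

docker pull microsoft/dotnet:1.1-sdk       # SDK image, for building
docker pull microsoft/dotnet:1.1-runtime   # smaller runtime-only image, for running
docker search microsoft/dotnet             # list related repositories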

There's also some samples at:

The samples are super easy to try out - STOP READING AND TRY THIS NOW. ;)

I'm always impressed with a nice asynchronous ASCII progress bar. I'm easy to impress. This is a "hello world" sample with some surprise ASCII art. I won't spoil it for you.

C:\Users\scott\Desktop> docker run microsoft/dotnet-samples

Unable to find image 'microsoft/dotnet-samples:latest' locally
latest: Pulling from microsoft/dotnet-samples
10a267c67f42: Downloading [========> ] 9.19MB/52.58MB
7e1a7ec87c21: Downloading [======================> ] 10.8MB/18.59MB
923d0cd2ed37: Download complete
7c523004cf83: Downloading [=========> ] 6.144MB/33.07MB
f3582118a43a: Waiting
c27ef6b597a0: Waiting

All the images are managed and maintained on GitHub so you can get involved if you're not digging the images or files.

One interesting thing to point out is the difference between dev images and production images, as well as images you'd use in CI/CD (Build Server) situations to build other images. Here are some examples from GitHub:

Development

  • dotnetapp-dev - This sample is good for development and building since it relies on the .NET Core SDK image. It performs dotnet commands on your behalf, reducing the time it takes to create Docker images (assuming you make changes and then test them in a container, iteratively).

Production

  • dotnetapp-prod - This sample is good for production since it relies on the .NET Core Runtime image, not the larger .NET Core SDK image. Most apps only need the runtime, reducing the size of your application image.
  • dotnetapp-selfcontained - This sample is also good for production scenarios since it relies on an operating system image (without .NET Core). Self-contained .NET Core apps include .NET Core as part of the app and not as a centrally installed component in a base image.
  • dotnetapp-current - This sample demonstrates how to configure an application to use the .NET Core 1.1 image. Both the .csproj and the Dockerfile have been updated to depend on .NET Core 1.1. This sample is the same as dotnetapp-prod with the exception of relying on a later .NET Core version.
  • aspnetapp - This sample demonstrates a Dockerized ASP.NET Core web app (a basic build-and-run sketch follows this list).
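Whichever sample you start from, the inner loop is the same docker build/run cycle. A minimal sketch, assuming you're sitting in a sample folder with its Dockerfile:

docker build -t dotnetapp .     # build an image from the sample's Dockerfile
docker run --rm dotnetapp       # run it, cleaning up the container on exit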

There's great Docker support in VS Code, Visual Studio 2017, and Visual Studio for Mac (the Preview channel). With VS and Docker on Windows you can even F5 (debug) into a Linux Container.

Some of you may have .NET Framework apps running in Virtual Machines that you'd love to get moved over to a container infrastructure. There's a tool called Image2Docker that Docker maintains that might help. It helps migrate VMs to containers. Check out the Image2Docker DockerCon talk or read Docker's Convert ASP.NET Web Servers To Docker with Image2Docker to learn more.

“Container Ship” by NOAA's National Ocean Service is licensed under CC BY 2.0


Sponsor: Check out Seq: simple centralized logging, on your infrastructure, with great support for ASP.NET Core and Serilog. Download version 4.0.



© 2017 Scott Hanselman. All rights reserved.
     

RetroPie and X-Arcade Tankstick - The perfect Retro Arcade (plus keybindings and config and how-to)


Eight years ago I stumbled on the husk of an old arcade cabinet and, along with my buddy John Batdorf, proceeded to reclaim the cabinet, refinish it, paint it, and turn it into a proper MAME (Multiple Arcade Machine Emulator) cabinet.

As an aside, a bit after helping me with this project, John happened to start an amazing business making furniture with reclaimed wood; check him out at http://deercreekfurnishings.com. Amazing stuff, truly.

X-ARCADE TANKSTICK + TRACKBALL: WITH USB

Last week I built a RetroPie into an X-Arcade Tankstick. This is my best retro arcade yet because it's got HDMI out and I can take it to friends' houses. That said, I'm going to briefly go over my other systems because they may be more attractive for your needs. If you have no patience, scroll down.

A full size MAME Cabinet - The Complete MAME Cabinet How-To

I wrote up a complete 7 part series on making your own MAME Arcade Cabinet. It's super fun and will only take a few weekends and perhaps a few hundred bucks.

  1. Cabinet and Power
  2. Monitor and Mounting
  3. Control Panel
  4. Sound and Lights
  5. Paint and Art
  6. Computer Hardware and Software
  7. Success and Conclusion

When I made my first MAME cabinet I put a small "Shuttle PC" inside. The MAME system is in my office and runs to this day on Windows 7 with a HyperSpin frontend.

Software Disclaimer 1: There's all sorts of iffy legal issues around emulating arcade games with boards/ROMs you don't own. This series of posts has nothing to do with that. I do own some original arcade boards, but if you want to emulate arcade games with MAME (Multiple Arcade Machine Emulator), you can search the 'tubes. What I'm doing here is putting a computer in a pretty box.

Hardware Disclaimer 2: Many folks that build arcade cabinets have a purist view of how these things should be done. They will prefer original Arcade CRT monitors and more expensive, higher quality parts. I am more of a pragmatist. I also have no idea what I'm doing, so I've also got ignorance on my side.

There's been a huge amount of work done in the last few years to reconcile the dozens of emulators and systems and the nightmare of keybindings, menus, and configuration. My first MAME machine took a few hours to install and literally weeks of messing around with the settings of various emulators. I started with the legendary v1 "X-Arcade Tankstick," which was effectively a PS2 keyboard. I took it apart and built it into my MAME system's control panel. I then needed to tell each individual emulator the key codes for up, down, left, right, a, b, x, y, etc. Each emulator had a different configuration file. Some were INI files, some XML, some freaking magic.

It's a lot to ask in 2017 to dedicate a complete PC to a retroarcade - in fact, it's just not necessary. A $35 Raspberry Pi 3 (or even an overclocked Raspberry Pi 2) has enough power to handle all but the most complex emulators.

Tiny Raspberry Pi Powered "CupCade"

Later I discovered RetroPie and built a tiny "cupcade" with plans from AdaFruit. It is/was a tiny little thing with just a basic menuing system, but it got me thinking about how powerful the Raspberry Pi is. The AdaFruit site has all the plans and parts you can buy. I had a local makerspace laser-cut the case. Assembly took just a weekend.

AdaFruit's CupCade

Hyperkin - An off-the shelf RetroArcade Console

We also picked up a Hyperkin Retron console. This is a great legal way to play retro games because it requires actual cartridges. We buy our games at Retro Game Trader. If you are EVER near Portland you HAVE to stop and check it out. It's insane.

There's an old joke about building retro arcade machines: is it more fun to play retro arcade games, or is it more fun to build a retro arcade machine with a cool front-end where every keybinding works in every emulator, but you never get around to playing games?

A RetroPie inside an X-Arcade Tankstick

There's a whole series of gotchas that took me a few weeks to work through when taking a Raspberry Pi, RetroPie software, and an X-Arcade and getting them to work well together.

THIS blog post is going to be a collection of all the stuff I wish I'd known BEFORE I started on this path. Even one of these tips would have saved me an hour, so the collection of them is days of googling, forum reading, and trial and error.

Start with this 7-part short video series (they are less than 10 min long, so it's not so scary) on the X-Arcade with the RetroPie.

Parts List

You'll want at least these things to start.

  • Raspberry Pi 3 - Don't skimp, get a 3. Yes you can use a 2, but you'll be far happier with a 3.
  • Raspberry Pi 3 Heatsink Set - Raspberry Pi's can be persnickety. Spend the $5 and get a heatsink.
  • 128GB high-speed microSDXC card - Get the largest and fastest microSD card you can. Class 10 is ideal.
  • 2 amp+ power supply with a 5-foot microUSB cable - Make sure your power supply does at LEAST 2 amps. Less and your Raspberry Pi may not boot up with keyboards or mice attached.
    • Remember that the goal here is to be able to plug this into your TV while you're sitting near or on your couch. You might even want a longer cable.
    • Make sure the microUSB power supply cable length matches your HDMI cable length. Your setup is only as good as the shorter of these two cables.
  • PS2 keyboard - Yes, PS2. I picked one up at Goodwill or a local Thrift Shop. You'll need this to program the X-Arcade Tankstick. You change its mode switch, press a button on the controller while simultaneously pressing a key on the PS2 keyboard. You'll repeat this until all your keys are set.
  • Also, you can never have too many cable ties.

And finally, last but not least: an X-Arcade Stick. You can get them with or without a trackball (which acts as an independent mouse and uses its own additional USB cable). As I mentioned, I'd long been a fan of all X-Arcade products. Their stuff is legendarily reliable and built like, well, a tank. They're fantastic in that you can even get adapters for your Xbox, Xbox One, Wii, Dreamcast, whatever.

My brother recently found an X-Arcade stick at a local thrift store for $30 and grabbed it for me. I opened it up and noticed it was the PS2 version from years ago. Fear not - you should be aware that there is the PS2 X-Arcade that requires a PS2 keyboard be attached, and there's the newer USB version. Here's the epic part - and reason #564 why I love X-Gaming as a company - you can upgrade the electronics in your v1 X-Arcade stick with a simple board for $35. And I did just that. This kit takes any existing X-Arcade to the latest hardware, and you're going to want the latest if you want your X-Arcade to work smoothly with RetroPie.

I took the back off the X-Arcade and threaded the HDMI cable and USB micro cable through the back holes. I 3D-printed a case (the yellow cage in the photo below) for the Raspberry Pi, but really any case will do as long as wires aren't touching wires. There's an RS232 cable and a vestigial green PS2 male connector that you can tuck away in there. I used the remaining hole to keep the purple PS2 female connector handy, as it'll be used for "programming" the keys for the X-Arcade.

Hey, it's a Raspberry Pi 3 shoved into an X-Arcade. That's not very sophisticated.

Yes, it's janky, but all I had was electrical tape. Ideally I'd get a rubber gasket for the wires to keep the tension off the Raspberry Pi and make it more "kid safe."

Photo Jun 06, 11 46 29 PM

Again, follow these videos. If you're a little technical it's pretty straightforward stuff. The general idea is this.

  • The Raspberry Pi uses the SD Card as its hard drive.
  • The X-Arcade is a keyboard and you'll have the PS2 keyboard temporarily plugged into it for setup.
  • The Raspberry Pi 3 is best not only because it's fast but it's also got built-in WiFi. If you use a Raspberry Pi 2, you'll need a Wifi adapter.
  • With your computer, you will use Win32DiskImager to copy a pre-made image of RetroPie to the SD Card.
  • You'll put the SD Card into the Pi, connect the X-Arcade via USB to the Pi, connect the PS2 keyboard to the X-Arcade, connect the Pi's HDMI to a monitor or TV, connect the power, and boot up.
  • You'll follow some on-screen prompts (again, see the videos) and set up RetroPie.
  • You'll program the X-Arcade to act as a keyboard.
  • Then you'll see what works and start debugging.
    • Debugging often consists of using putty and/or Bash for Windows to ssh into the Raspberry Pi. The user name is pi and the password is raspberry so that's usually "ssh pi@retropie" then the password.

Little Gotchas when Hooking up RetroPie and an X-Arcade

Now to the little details that took me weeks that will hopefully help you.

  • Xarcade2Jstick vs standard keyboard mapping - Some people swear that the X-Arcade stick will/can get detected as a joystick using a user-space driver called Xarcade2Jstick. This driver is built into Retropie now and it takes your keyboard/xarcade and "lies" to the system and makes it look like two gamepads. Some folks swear by it. I fought with it for a week and decided that since I understand keyboards, I would just stick with keyboard mapping. Your mileage may vary, but the good thing to know (and try) is that if your system "just works" when you boot up, then perhaps Xarcade2jstick worked amazingly for you and you can skip a LOT of this mess. Sound off in the comments.
    The gent who made the videos also believes that keyboard mapping is more reliable and recommends this "non-standard" setup, programmed into "bank 2" of the X-Arcade. That means the toggle switch is in the second position inward, away from the serial port, when you program it. He recommends this layout and I've used it also. This is a screenshot from his video.
    Keyboard mappings are in the linked Zip file
  • NOTE: I needed to go into optional components in RetroPie setup and specifically disable xarcade2jstick. You can re-run RetroPie-setup from the command line as often as you like.
    cd RetroPie-Setup
    sudo ./retropie_setup.sh
  • Keyboard bindings for RetroArch-compliant emulators - Now, I think I understand this, but if I get it wrong, let me know in the comments. There is an organization called "libRetro" that comprises the libRetro library, the RetroArch frontend that runs libRetro programs, and Lakka, a Linux distribution meant for retro arcades. You don't need to sweat Lakka since you used a default RetroPie image, but you'll be hearing a lot about RetroArch. Remember earlier when I was complaining about all the trouble configuring emulators? RetroArch has scoped, nested config files (with includes) that allow you to specify your config and keyboard/gamepad/joystick mapping once, and then participating emulators will "just work."
    Another way to look at it is this: in the past you needed lots of emulator programs from lots of people, with lots of config that was all different. RetroArch tries to unify all of this, so there are "cores" for each emulated system that RetroArch calls out to for the emulation.
    Follow the videos, but you'll basically go to /opt/retropie/configs/all and edit retroarch.cfg to support the keymapping above. MOST of the emulators will pick these settings up. But not all. More on that in a second.

    Like this:

    input_player1_a = t
    input_player1_b = r
    input_player1_y = q
    input_player1_x = w
    input_player1_start = num1
    input_player1_select = num5
    input_player1_l = e
    input_player1_r = y
    input_player1_left = left
    input_player1_right = right
    input_player1_up = up
    input_player1_down = down
    input_player1_l2 = u
    input_player1_r2 = i
    input_player1_l3 = nul
    input_player1_r3 = nul

    input_player2_a = j
    input_player2_b = h
    input_player2_y = d
    input_player2_x = f
    input_player2_start = num2
    input_player2_select = num6
    input_player2_l = g
    input_player2_r = k
    input_player2_left = a
    input_player2_right = s
    input_player2_up = o
    input_player2_down = p
    input_player2_l2 = l
    input_player2_r2 = z
    input_player2_l3 = nul
    input_player2_r3 = nul

  • Exiting games with the X-Arcade controller - One of the most common questions I saw in the forums was "I can move around in the menus and launch an emulator, but I can't exit it!" Folks were forced to pull the plug and hard reboot, which isn't a sustainable solution. The X-Arcade has a "flipper" button on each side (imagine a pinball flipper's controlling button position). The standard hotkey for exiting an emulator has historically been pressing the Player 1 start button PLUS the left flipper button. That's the 1 and 5 keys together if you look at the diagram above.
    You'll want to go to /opt/retropie/configs/all and edit retroarch.cfg and confirm that you have these lines somewhere:
    input_enable_hotkey = num1
    input_exit_emulator = num5
    Then launch emulationstation (or reboot), launch an emulator, and press P1 and left bumper/flipper. You'll also come to know the left and right flipper buttons as the virtual "insert coin" buttons for Player 1 (P1) and Player 2 (P2) respectively.
  • Some emulators don't listen to RetroArch settings - Depending on the RetroPie image you downloaded, you may find that some emulators don't listen to or respect your core retroarch.cfg settings. Or perhaps the default buttons don't feel right. For example, Sega controller buttons are two rows of three buttons each. You can override your settings to make your X-Arcade more intuitive. Go to /opt/retropie/configs/megadrive and edit the retroarch.cfg in there. Note that it includes the MAIN "all" retroarch.cfg, so you're just overriding some settings on an emulator-by-emulator basis.

    # Settings made here will only override settings in the global retroarch.cfg if placed above the #include line

    input_player1_a = y
    input_player1_b = t
    input_player1_y = r
    input_player1_x = w
    input_player1_l = q
    input_player1_r = e

    input_player2_a = k
    input_player2_b = j
    input_player2_y = h
    input_player2_x = f
    input_player2_l = d
    input_player2_r = g

    input_remapping_directory = /opt/retropie/configs/megadrive/

    #include "/opt/retropie/configs/all/retroarch.cfg"

  • MAME isn't working at all with the X-Arcade - This hit me, and I see lots of folks struggling. If the MAME core/emulator you're using doesn't integrate with RetroArch, you may need to manually keymap within MAME itself. Use the attached keyboard (while it's still attached) and, when inside MAME, press Tab. You'll go into the "Input (general)" menu and go down the line one at a time remapping the keys. It's NOT obvious that you have to press and hold the buttons on your X-Arcade before MAME will pick up the new mapping. It's also not obvious that if you press AGAIN and hold, you can tell MAME another, alternate key. In other words, the "OR" key, as in "1 OR 5" if you like.
    You might like to know that advmame stores these configurations in /opt/retropie/configs/mame-advmame in *.rc files. For example, I had advmame-0.94.0.rc and wanted to be able to exit MAME from my X-Arcade. If I had a keyboard attached, I'd press ESC, but with the X-Arcade I wanted "Player 1 plus left flipper" to work. Then I wanted either Enter or the main button for Player 1 to confirm. I ended up with this. Again, this is for non-RetroArch advmame, but it makes the larger point in case you run into these kinds of emulators.

    input_map[ui_pause] keyboard[0,enter] or keyboard[0,tab] keyboard[0,up]
    input_map[ui_select] keyboard[0,enter] or keyboard[0,q]
    input_map[ui_cancel] keyboard[0,5] keyboard[0,1] or keyboard[0,esc]

That pretty much covers all the hair-pulling-out of the last few weeks. The result is very nice, though. I hope you make one also!

Photo Jun 07, 12 58 22 AM


Sponsor: Check out Seq: simple centralized logging, on your infrastructure, with great support for ASP.NET Core and Serilog. Download version 4.0.


© 2017 Scott Hanselman. All rights reserved.
     

Trying .NET Core on Linux with just a tarball (without apt-get)


There's a great post on the .NET Blog about the crazy Performance Improvements in .NET Core that ended up on Hacker News. The top comment on HN is a great one that points out that the http://dot.net  website could be simpler, that it could be a one-pager with a clearer Getting Started experience.

They also said this:

Also, have a simple downloadable .tar.gz which expands into /bin + /lib + /examples. I loved C# back in my Windows days and I moved to Linux to escape Microsoft complexities and over-reliance on complex IDEs and tools, scattered like shrapnel all over my c:/

I will not run apt-get against your repo without knowing ahead of time what I'm getting and where will it all go, so let me play with the tarball first.

This is a great point, and we're going to look at revamping and simplifying http://dot.net/core with this in mind in the next few weeks. They're saying that the Linux instructions, like these instructions on installing .NET Core on Ubuntu for example, make you trust a 3rd-party apt repo and apt-get .NET, while they want a more non-committal option. This gets to the larger "the website is getting bigger than it needs to be and confusing" point.

.NET Core from a tarball on Linux

Trying out .NET Core from a tarball

Go to https://www.microsoft.com/net/download/linux and download the .tar.gz for your distro to a nice local area.

NOTE: You MAY need to apt-get install libunwind8 if you get an error like "Failed to load /home/ubuntu/teste-dotnet-rc2/libcoreclr.so, error: libunwind.so.8: cannot open shared object file: No such file or directory" but libunwind isn't very controversial.
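In other words, worst case you run something like this first and move on:

sudo apt-get update
sudo apt-get install libunwind8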

Once you've unzipped/untarred it into a local folder, just be sure to run dotnet from that folder.

Desktop $ mkdir dotnetlinux

Desktop $ cd dotnetlinux/
dotnetlinux $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
dotnetlinux $ curl -o dotnet.tar.gz https://download.microsoft.com/download/E/7/8/E782433E-7737-4E6C-BFBF-290A0A81C3D7/dotnet-dev-ubuntu.16.04-x64.1.0.4.tar.gz
dotnetlinux $ tar -xvf dotnet.tar.gz
dotnetlinux $ cd /mnt/c/Users/scott/Desktop/localdotnettest/
localdotnettest $ ../dotnetlinux/dotnet new console
Content generation time: 103.842 ms
The template "Console Application" created successfully.
localdotnettest $ ../dotnetlinux/dotnet restore
Restoring packages for /mnt/c/Users/scott/Desktop/localdotnettest/localdotnettest.csproj...
localdotnettest $ ../dotnetlinux/dotnet run
Hello World!

There aren't samples in this tar file (yet), but there are (some weak) samples at https://github.com/dotnet/core/tree/master/samples. You can clone https://github.com/dotnet/core.git and run them from the samples folder. Note from the ReadMe that https://github.com/dotnet/core is the jumping-off point for the other repos.
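Something like this should do it, with the caveat that the repo layout may shift over time (the samples path is from the link above):

localdotnettest $ git clone https://github.com/dotnet/core.git
localdotnettest $ cd core/samples
# then run a sample with your local dotnet, adjusting the relative path to wherever you untarred it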

The more interesting "samples" are the templates you have available to you from "dotnet new."

localdotnettest $ /mnt/c/Users/scott/Desktop/dotnetlinux/dotnet new

*SNIP*

Templates                  Short Name    Language      Tags
----------------------------------------------------------------------
Console Application        console       [C#], F#      Common/Console
Class library              classlib      [C#], F#      Common/Library
Unit Test Project          mstest        [C#], F#      Test/MSTest
xUnit Test Project         xunit         [C#], F#      Test/xUnit
ASP.NET Core Empty         web           [C#]          Web/Empty
ASP.NET Core Web App       mvc           [C#], F#      Web/MVC
ASP.NET Core Web API       webapi        [C#]          Web/WebAPI
Solution File              sln                         Solution

Examples:
dotnet new mvc --auth None --framework netcoreapp1.1
dotnet new classlib
dotnet new --help

From here you can "dotnet new web" or "dotnet new console" using your local dotnet before you decide to commit to installing .NET Core from an apt repo or yum or whatever.


Sponsor: Check out Seq: simple centralized logging, on your infrastructure, with great support for ASP.NET Core and Serilog. Download version 4.0.


© 2017 Scott Hanselman. All rights reserved.
     

LLBLGen Pro for .NET and .NET Core - Database Entity Modeling with any ORM

$
0
0

There are opinionated frameworks, and then there are opinionated frameworks that also respect your opinion. LLBLGen is one of those. For many years it's been a great entity modeling tool as well as an excellent ORM (Object Relational Mapper). It also supports all the major ORMs in the .NET space, like Entity Framework, NHibernate, and Linq to Sql, as well as, of course, their own included LLBLGen Pro Runtime Framework. It works with VS2015 and VS2017 and is actively supported and extremely actively developed. It's because of that active development that I wanted to check it out. It's got Getting Started videos and a TON of docs, so I figured I could do some damage pretty quickly with a 30-day trial.

NOTE: Just a reminder, I don't do sponsored posts for software. I just felt like checking out LLBLGen because it's been a few years since I looked at it last. All my observations are my own, unfiltered, as I know you like them, Dear Reader.

You can do Database First - a technique that is crucial for so many of us with existing databases but often downplayed with other ORMs - as well as Model First and then generate classes.

I decided to start with one of the newer SQL Server 2016 sample databases called Worldwide Importers. There's localdb versions, Azure SQL Database versions, and SQL Server 2016 backups. I made a database in Azure, uploaded a "bacpac" file to Azure storage, and imported the database into SQL Azure. Although I certainly could have done the work locally, I can get more horsepower in the cloud.

When I make a new project in the LLBLGen GUI, I can pick from a ton of different ORMs, including 5 (!) versions of Entity Framework (including EF Core), as well as NHibernate v4 and Linq to SQL (which is a nice touch, as I have two L2S projects still in production).

LLBLGen supports a bunch of ORMs

The WorldWide Importers sample is a nice one as it's typical and non-trivial in complexity. I pointed LLBLGen at it and let it rip. Make sure you wait until your database is totally restored into SQL Azure or your SQL Server or you may get weird errors about Zombie Transactions.

LLBLGen chewing on the DB

When it's done, you'll get an Errors & Warnings pane that will tell you how many stored procs, tables, views, etc. were imported, and that they are "unmapped," which makes sense, since you haven't mapped them yet.

Smart Errors in LLBLGen

You can switch your Target ORM Framework after you've imported your Data Model, but you really should put a little thought into how your database is structured and whether or not your preferred ORM supports all the features you (may) have used heavily in your Database. For example, if you're a very "stored proc"-style shop, it would be a problem if you really wanted to use an ORM that didn't support stored procs.

LLBLGen is rather extraordinary in that it not only has smarts about what's possible and what's not, but it also offers you a multiple-choice solution when something is wrong. For example, there's a mapping here that isn't supported, so it's offering me three options to fix it, including (of course) changing the offending entity by changing/adding fields.

LLBLGen offers multiple fixes and can do them right there in the Errors pane

Once you have a valid model and have corrected any issues and/or made appropriate changes, you can Generate Source Code for your target platform, language, and ORM Framework.

Generating Code with LLBLGen

Make no mistake about it - there's a LOT of depth here. There's multiple kinds of templates and tons of options. You may not get it all right on the first try, but it's very forgiving. Just remember where the authoritative source of truth is. Is your model the truth? Or your database? As you move forward (depending on where you started) your source of truth will likely change. You can use any of the many code generators or expand them with your own modifications and metadata.

You'll also likely get addicted to the nice visual editors for entities (a good thing!).

LLBLGen Visual Editors

Quick Model is also nice if you want to visualize (and change) relationships between just a few of your many tables.

LLBLGen Visual Designer

If you get fast enough, with practice you can use the Quick Model editor and its Command Input palette to model most of a new database while interviewing domain experts. The visual designer is fast and flexible.

I've truly barely scratched the surface of this deep tool. The pricing is very reasonable considering all it does.

Have you used LLBLGen or similar tools lately? What's been your impression?


Sponsor: Big thanks to Raygun! Don't rely on your users to report the problems they experience. Automatically detect, diagnose and understand the root cause of errors, crashes and performance issues in your web and mobile apps. Learn more.



© 2017 Scott Hanselman. All rights reserved.
     

How to reference a .NET Core library in WinForms - Or, .NET Standard Explained

$
0
0

I got an interesting email today. The author said, "I have a problem consuming a .NET Core class library in a WinForms project and can't seem to find a solution." This was interesting for a few reasons: first, it's solvable; second, it's common; and third, it's a good opportunity to clear a few things up with a good example.

To start, I emailed back with "precision questioning." I needed to assert my assumptions and get a few very specific details to make sure this was, in fact, possible. I said, "What library are you trying to use? What versions of each side (core and winforms)? What VS version?"

The answer was "I am working with VS2017. The class library is on NETCoreApp 1.1 and the app is a Winforms project on .NET Framework 4.6.2."

Cool! Let's solve it.

Referencing a .NET Core library from WinForms (running .NET Full Framework)

Before we parse this question, let's level-set.

.NET is this big name. It's the name for the whole ecosystem, but it's overloaded in such a way that someone can say "I'm using .NET" and you only have a general idea of what that means. Are you using it on mobile? in docker? on windows?

Let's consider that ".NET" as a name is overloaded and note that there are a few "instances of .NET":

  • .NET (full) Framework - Ships with Windows. Runs ASP.NET, WPF, WinForms, and a TON of apps on Windows. Lots of businesses depend on it and have for a decade. Super powerful. A non-technical parent might download it to run Paint.NET or a game.
  • .NET Core - Small, fast, open source, and cross-platform. Runs not only on Windows but also Mac and a dozen flavors of Linux.
  • Xamarin/Mono/Unity - The .NET that makes it possible to write apps in C# or F# and run them on everything from an iPad to a cheap Android phone to a Nintendo Switch.

All of these runtimes are .NET. If you learn C# or F# or VB, you're a .NET Programmer. If you do a little research and google around you can write code for Windows, Mac, Linux, Xbox, Playstation, Raspberry Pi, Android, iOS, and on and on. You can run apps on Azure, GCP, AWS - anywhere.

What's .NET Standard?

.NET Standard isn't a runtime. It's not something you can install. It's not an "instance of .NET."  .NET Standard is an interface - a versioned list of APIs that you can call. Each newer version of .NET Standard adds more APIs but leaves older platforms/operating systems behind.

The runtimes then implement this standard. If someone comes out with a new .NET that runs on a device I've never heard of, BUT it "implements .NET Standard" then I just learned I can write code for it. I can even use my existing .NET Standard libraries. You can see the full spread of .NET Standard versions to supported runtimes in this table.

Now, you could target a runtime - a specific .NET - or you can be more flexible and target .NET Standard. Why lock yourself down to a single operating system or specific version of .NET? Why not target a list of APIs that are supported on a ton of platforms?

The person who emailed me wanted to "run a .NET Core library on WinForms." Tease that statement apart. What they really want is to reuse code - a DLL/library, specifically.

When you make a new library in Visual Studio 2017 you get these choices. If you're making a brand new library that you might want to use in more than one place, you'll almost always want to choose .NET Standard.

.NET Standard isn't a runtime or a platform. It's not an operating system choice. .NET Standard is a bunch of APIs.

Pick .NET Standard

Next, check properties and decide what version of .NET Standard you need.

What version of .NET Standard?

The .NET Core docs are really quite good, and the API browser is awesome. You can find them at https://docs.microsoft.com/dotnet/ 

The API browser has all the .NET Standard APIs versioned. You can put the version in the URL if you like, or use this nice interface. https://docs.microsoft.com/en-us/dotnet/api/?view=netstandard-2.0

API Browser

You can check out .NET Standard 1.6, for example, and see all the namespaces and methods it supports. It works on Windows 10, .NET Framework 4.6.1 and more. If you need to make a library that works on Windows 8 or an older .NET Framework like 4.5, you'll need to choose a lower .NET Standard version. The table of supported platforms is here.

From the docs - When choosing a .NET Standard version, you should consider this trade-off:

  • The higher the version, the more APIs are available to you.
  • The lower the version, the more platforms implement it.

In general, we recommend targeting the lowest version of .NET Standard possible. The goal here is reuse. You can also check out the Portability Analyzer and run it on your existing libraries to see if the APIs you need are available.

.NET Portability Analyzer

.NET Standard is what you target for your libraries, and the apps that USE your library target a platform.

Diagram showing .NET Framework, Core, and Mono sitting on top the base of .NET Standard

I emailed them back briefly, "Try making the library netstandard instead."

They emailed back just a short email, "Yes! That did the trick!"
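For the record, a minimal sketch of that fix from the command line (project name hypothetical; in Visual Studio it's just a matter of picking the Class Library (.NET Standard) template instead):

dotnet new classlib -o MyShared    # creates a .NET Standard class library, not a netcoreapp one
# Reference MyShared from BOTH the WinForms (.NET Framework 4.6.2) project and any
# .NET Core app - each runtime implements the Standard version the library targets.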


Sponsor: Big thanks to Raygun! Don't rely on your users to report the problems they experience. Automatically detect, diagnose and understand the root cause of errors, crashes and performance issues in your web and mobile apps. Learn more.


© 2017 Scott Hanselman. All rights reserved.
     

Get Solarized - Awesome command prompt colors for VS, VS Code, cmd, PowerShell, and more


I was on a call with my co-worker Maria today and she commented on how nice my command prompt in Windows looked. I told her it was "Solarized," and then our conference call fell apart as we collected all kinds of fun info about how you can get Solarized in your favorite apps on Windows.

Solarized is a sixteen-color palette (eight monotones, eight accent colors) designed for use with terminal and GUI applications. It's by Ethan Schoonover and it's spread all over the web. You can see screenshots and learn about it on GitHub.

Solarized for your Windows Command Prompt (cmd, powershell, bash)

By default, when you right-click and hit Properties on a shortcut for a prompt like cmd, PowerShell, or bash, you'll get a dialog that looks like this.

Default Colors in CMD

You'll see there are 16 colors: usually 8 colors on the left, and then the "light/intense/bold" version of each color on the right. I usually used intense terminal green on black before Solarized.

Those values (the defaults) are stored in the registry under HKEY_CURRENT_USER\Console.

Where default colors are stored in the Registry
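You can peek at those values without opening regedit; the sixteen ColorTable00 through ColorTable15 entries hold the palette:

C:\> reg query HKCU\Console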

Those defaults are used for NEW shortcuts or consoles that start afresh via Windows+R. This won't change existing shortcuts you may have already created. There are a few ways to fix this.

I've found the easiest manual way is to recreate the shortcuts. You can do this by just copy-pasting a shortcut and using the new one.

However, there is talk of programmatically updating .lnk (Start Menu link) files with PowerShell. I've started that work here, and I'll PR the main repo if I can solve one issue: I can't get it to switch to Solarized Light, just Dark. It might be something wrong on my side.

You'd just go to the location of each .lnk file you want to change, then run Update-Link.ps1 YOURLINK.LNK "light|dark", and it'll load up the .lnk file using Windows APIs and save it with a new color table.
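So, for example, something like this, where the .lnk path is whatever shortcut you're fixing (the path here is made up):

PS> .\Update-Link.ps1 '.\Developer Command Prompt for VS2015.lnk' dark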

Here I went to where the Start Menu stores most of the .lnk files. You can also search for an item in your Start Menu and right-click "Open File Location."

Programmatically update your LNKs with PowerShell

Here's before and after with my Developer Command Prompt for Visual Studio 2015.

Solarized!

NOTE: Once this is done, in cmd.exe you can also switch between light and dark with "color f6" or "color 01" which is nice for presentations. I'm not sure how to do this yet in PowerShell or Bash.

Here is the palette after:

Solarized Palette

For PowerShell there is also an extra step you'll want to put into your Microsoft.PowerShell_profile.ps1, where you map things like errors, progress bars, and warnings internally in PowerShell. Be sure to read the instructions.
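As a sketch of what that extra step looks like, these are the standard console host properties you'd remap in your profile (the exact color choices come from the Solarized instructions, not from me):

# In Microsoft.PowerShell_profile.ps1 - remap host colors to Solarized-friendly ones
$Host.PrivateData.ErrorForegroundColor    = 'Red'
$Host.PrivateData.WarningForegroundColor  = 'Yellow'
$Host.PrivateData.ProgressForegroundColor = 'Cyan'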

Solarized in Visual Studio and Visual Studio Code

As for Visual Studio and Visual Studio Code, they're far easier. You can just press Ctrl-K then Ctrl-T in VS Code and pick Solarized.

Solarized in VS Code

For Visual Studio (all versions), you can head over to @leddt's GitHub and download settings files for Solarized that you can then import into VS from Tools | Import and Export Settings.


Sponsor: Big thanks to Raygun! Don't rely on your users to report the problems they experience. Automatically detect, diagnose and understand the root cause of errors, crashes and performance issues in your web and mobile apps. Learn more.


© 2017 Scott Hanselman. All rights reserved.
     

Solved: Surface Pro 3 USB Driver Issues with the Surface Diagnostic Toolkit


I've got a personal Surface Pro 3 that I like very much. It's worked great for years and I haven't had any issues with it. However, yesterday while installing a 3rd party USB device something got goofed around with the drivers and I ended up in this state.

Universal Serial Bus (USB) Controller banged out in Device Manager

That "banged out" device in my Device Manager is the root Universal Serial Bus (USB) Controller for the Surface. That means everything  USB didn't work since everything USB hangs off that root device node. I know it's an Intel USB 3.0 xHCI Host Controller but I didn't want to go installing random Intel Drivers. I just wanted the Surface back the way it was, working, with the standard drivers.

I tried the usual stuff like Uninstalling the Device and rebooting, hoping Windows would heal it but it didn't work. Because the main USB device was dead that meant my Surface Type Keyboard didn't work, my mouse didn't work, nothing. I had to do everything with the touchscreen.

After a little poking around on Microsoft Support websites, a friend turned me onto the "Surface Tools for IT." These are the tools that IT Departments use when they are rolling out a bunch of Surfaces to an organization and they are regularly updated. In fact, these were updated just yesterday!

Surface Diagnostic Toolkit

There are a number of utilities you can check out but the most useful is the Surface Diagnostic Toolkit. It checks hardware and software versions and found a number of little driver issues...and fixed them. It reset my USB Controller and put in the right driver and I'm back in business.

This util was useful enough to me that I wish it had been installed by default on the Surface and plugged into the built-in Windows Troubleshooting feature.


Sponsor: Seq is simple centralized logging, on your infrastructure, with great support for ASP.NET Core and Serilog. Version 4 adds integrated dashboards and alerts - check it out!



© 2017 Scott Hanselman. All rights reserved.
     

Exploring CQRS within the Brighter .NET open source project


The logo for the "Brighter" Open Source project is a little cannon. Fire and Forget?

There's a ton of cool new .NET Core open source projects lately, and I've very much enjoyed exploring this rapidly growing space. Today at lunch I was checking out a project called "Brighter." It's actually been around in the .NET space for many years and is in the process of moving to .NET Core for greater portability and performance.

Brighter is a ".NET Command Dispatcher, with Command Processor features for QoS (like Timeout, Retry, and Circuit Breaker), and support for Task Queues"

Whoa, that's a lot of cool and fancy words. What's it mean? The Brighter project up on GitHub includes a bunch of libraries and examples that you can pull in to support CQRS architectural styles in .NET. CQRS stands for Command Query Responsibility Segregation. As Martin Fowler says, "At its heart is the notion that you can use a different model to update information than the model you use to read information." The Query Model reads and the Command Model updates/validates. Greg Young gives the first example of CQRS here. If you are a visual learner, there's a video from late 2015 where Ian Cooper explains a lot of this at the London .NET User Group, or an interview with Ian Cooper on Channel 9.
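
To make the split concrete before diving into Brighter's API, here's a tiny sketch of my own (hypothetical names, not Brighter's types) - the whole idea in four declarations:

using System;

// The write side: a command mutates state and returns nothing interesting
public class RenameUserCommand
{
    public Guid UserId { get; set; }
    public string NewName { get; set; }
}

// The read side: a query reads state and must not change anything
public class GetUserNameQuery
{
    public Guid UserId { get; set; }
}

public interface IHandleCommand<TCommand>
{
    void Handle(TCommand command);   // validate, then mutate
}

public interface IHandleQuery<TQuery, TResult>
{
    TResult Handle(TQuery query);    // side-effect free
}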

Brighter also supports "Distributed Task Queues" which you can use to improve performance when you're using a query or integrating with microservices.

When building distributed systems, Hello World is NOT the use case. BUT, it is a valid example in that it strips aside any business logic and shows you the basic structure and concepts.

Let's say there's a command you want to send. The GreetingCommand. A command can be any write or "do this" type command.

internal class GreetingCommand : Command
{
    public GreetingCommand(string name)
        : base(Guid.NewGuid())   // note: new Guid() would give every command the same empty id
    {
        Name = name;
    }

    public string Name { get; private set; }
}

Now let's say that something else will "handle" these commands. This is the "do it" part. Nowhere do we call Handle() ourselves; similar to dependency injection, we won't be in the business of calling Handle() directly - the underlying framework will abstract that away.

internal class GreetingCommandHandler : RequestHandler<GreetingCommand>
{
    [RequestLogging(step: 1, timing: HandlerTiming.Before)]
    public override GreetingCommand Handle(GreetingCommand command)
    {
        Console.WriteLine("Hello {0}", command.Name);
        return base.Handle(command);
    }
}

We then register a factory that takes types and returns handlers. In a real system you'd use IoC (Inversion of Control) dependency injection for this mapping as well.
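
Since the example's SimpleHandlerFactory isn't shown here, this is what a no-frills version might look like - a hypothetical sketch that just news up handlers with Activator. The interface name and signature are my assumptions based on Brighter at the time of writing, so double-check them against the version you're using:

using System;
using paramore.brighter.commandprocessor;

// Hypothetical trivial factory: no IoC container, just Activator
internal class SimpleHandlerFactory : IAmAHandlerFactory
{
    public IHandleRequests Create(Type handlerType)
    {
        return (IHandleRequests)Activator.CreateInstance(handlerType);
    }

    public void Release(IHandleRequests handler)
    {
        // Nothing to clean up in this toy example
    }
}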

Our Main() has a registry that we pass into a larger pipeline where we can set policy for processing commands. This pattern may feel familiar with "Builders" and "Handlers."

private static void Main(string[] args)
{
    var registry = new SubscriberRegistry();
    registry.Register<GreetingCommand, GreetingCommandHandler>();

    var builder = CommandProcessorBuilder.With()
        .Handlers(new HandlerConfiguration(
            subscriberRegistry: registry,
            handlerFactory: new SimpleHandlerFactory()
        ))
        .DefaultPolicy()
        .NoTaskQueues()
        .RequestContextFactory(new InMemoryRequestContextFactory());

    var commandProcessor = builder.Build();

    ...
}

Once we have a commandProcessor, we can Send commands to it and the work will get done. Again, how you ultimately create the commands is up to you.

commandProcessor.Send(new GreetingCommand("HanselCQRS"));

Methods within RequestHandlers can also have other behaviors associated with them, as in the case of [RequestLogging] on the Handle() method above. You can add other stuff like Validation, Retries, or Circuit Breakers. The idea is that Brighter offers a pipeline of handlers that can all operate on a Command. The Celery Project is a similar project, except written in Python. The Brighter project has stated they have lofty goals, intending to one day handle fault tolerance like Netflix's Hystrix project.

One of the nicest aspects to Brighter is that it's prescriptive but not heavy-handed. They say:

Brighter is intended to be a library not a framework, so it is consciously lightweight and divided into packages that allow you to consume only those facilities that you need in your project.

Moving beyond Hello World, there are more fleshed out examples like a TaskList with a UI, back end Http API, a Mailer service, and core library.

Be sure to explore Brighter's excellent documentation and examples, but be aware, this is a project under active development. Perhaps if you're new to OSS, if you find a broken link or two or a misspelling, you can do Your First Pull Request with a small fix?

Do be aware, again, that CQRS is not for every project. It's non-trivial and it's a "mental leap" as Martin Fowler puts it. If you buy in, you're adding complexity...for a reason. Keep your eyes open and do your research. It's a great pattern if you have a high performance/volume application that struggles with write concurrency or a flaky backend.

In fact there are quite a few mature CQRS libraries in the .NET open source space. I'll explore a few - which are your favorites?


Sponsor: Seq is simple centralized logging, on your infrastructure, with great support for ASP.NET Core and Serilog. Version 4 adds integrated dashboards and alerts - check it out!



© 2017 Scott Hanselman. All rights reserved.
     

Speed of dotnet run vs the speed of dotnet for published apps (plus self-contained .NET Core apps)


The .NET Core team really prides themselves on performance. However, it's not immediately obvious (as with all systems) if you just do Hello World as a developer. Just today I was doing a Ruby on Rails app in Development Mode with mruby - but that's not what you'd go to production with.

Let's look at a great question I got today on Twitter.

Dotnet Run - Builds and Runs Source Code in Development

That's a great question. If you install .NET Core 2.0 Preview - this person is on a Mac, but you can use Linux or Windows as well - then do just this:

$ dotnet new console

$ dotnet run

It'll be about 3-4 seconds. dotnet is the SDK and dotnet run will build and run your source code. Here's a short bit from the docs:

The dotnet run command provides a convenient option to run your application from the source code with one command. It's useful for fast iterative development from the command line. The command depends on the dotnet build command to build the code. Any requirements for the build, such as that the project must be restored first, apply to dotnet run as well.

While this is super convenient, it's not totally obvious that dotnet run isn't something you'd go to production with (especially Hello World Production, which is quite demanding! ;) ).

Dotnet Publish then Dotnet YOUR.DLL for Production

Instead, do a dotnet publish, note the compiled DLL created, then run "dotnet tst.dll."

For example:

C:\Users\scott\Desktop\tst> dotnet publish
Microsoft (R) Build Engine version 15.3 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

tst -> C:\Users\scott\Desktop\tst\bin\Debug\netcoreapp2.0\tst.dll
tst -> C:\Users\scott\Desktop\tst\bin\Debug\netcoreapp2.0\publish\
C:\Users\scott\Desktop\tst> dotnet .\bin\Debug\netcoreapp2.0\tst.dll
Hello World!

On my machine, dotnet run is 2.7s, but dotnet tst.dll is 0.04s.
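
If you want to measure this yourself rather than eyeball it, a tiny timer app works - a rough sketch (the gap is mostly MSBuild doing restore/build checks for dotnet run, versus just starting the runtime for dotnet tst.dll):

using System;
using System.Diagnostics;

class TimeIt
{
    // Usage: TimeIt dotnet "tst.dll"   vs   TimeIt dotnet "run"
    static void Main(string[] args)
    {
        var sw = Stopwatch.StartNew();
        var p = Process.Start(new ProcessStartInfo
        {
            FileName = args[0],
            Arguments = args[1],
            UseShellExecute = false
        });
        p.WaitForExit();
        Console.WriteLine($"Elapsed: {sw.Elapsed.TotalSeconds:F2}s");
    }
}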

.NET Core is fast

Dotnet publish --self-contained

I could then publish a complete self-contained app. I'm using Windows, so I'll publish for Windows, but you could build on a Windows machine and target a Mac runtime, etc. That will make a \publish folder.

C:\Users\scott\Desktop\tst> dotnet publish  --self-contained -r win10-x64

Microsoft (R) Build Engine version 15.3 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

tst -> C:\Users\scott\Desktop\tst\bin\Debug\netcoreapp2.0\win10-x64\tst.dll
tst -> C:\Users\scott\Desktop\tst\bin\Debug\netcoreapp2.0\win10-x64\publish\
C:\Users\scott\Desktop\tst> .\bin\Debug\netcoreapp2.0\win10-x64\publish\tst.exe
Hello World!

Note in this case I have a "Self-Contained" app, so all of .NET Core is in that folder and below. Here I run tst.exe, not dotnet.exe because now I'm an end-user.

The results of a published .NET Core App

I hope this helps clear things up.


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!



© 2017 Scott Hanselman. All rights reserved.
     

Porting a 15 year old .NET 1.1 Virtual CPU Tiny Operating System school project to .NET Core 2.0


The 2002 TinyOS in C# is now on .NET Core in 2017, running on Ubuntu

I've had a number of great guests on the podcast lately. One topic that has come up a number of times is the "toy project." I've usually kept mine private - never putting them on GitHub - somewhat concerned that people would judge me and my code. However, hypocrite that I am (aren't we all?), I have advocated that others put their "Garage Sale Code" online. So here's some crappy code. ;)

The Preamble

While I've been working as an engineer for 25 years this year, I didn't graduate from school with a 4 year degree until 2003 - I just needed to get it done, for myself. I was poking around recently and found my project from OIT's CST352 "Operating Systems" class. One of the projects was to create a "Virtual CPU and OS." This is kind of a thought exercise. It's not really a parser/lexer - although there is both - and it's not a real OS. But it needs to be able to take in a made-up quasi-Assembly Language instruction set and execute it on a virtual CPU while managing virtual memory of arbitrary size. Again, a thought exercise made real to confirm that the student understands the responsibilities of a CPU.

Here's an example "application." Confused yet? Here's the original spec I was given in 2002 that includes the 36 instructions the "CPU" should understand. It has 10 general-purpose 32bit registers address as 1 through 10. Register 10 is the stack pointer. There are two bit flag registers - sign flag and zero flag.

Instructions are "opcode arg1 arg2" with constants prefixed with "$."

11 r8     ;Print r8
6 r1 $10  ;Move 10 into r1
6 r2 $6   ;Move 6 into r2
6 r3 $25  ;Move 25 into r3
23 r1     ;Acquire lock in r1 (currently 10)
11 r3     ;Print r3 (currently 25)
24 r1     ;Release lock in r1 (currently 10)
25 r3     ;Sleep r3 (currently 25)
11 r3     ;Print r3 (currently 25)
27        ;Exit
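
If you've never built one, the heart of a virtual CPU like this is just a fetch-decode-execute loop. Here's a radically simplified, hypothetical sketch (NOT the actual TinyOS code) handling only "move constant" (6), "print" (11), and "exit" (27):

using System;

class TinyCpuSketch
{
    // r[1]..r[10] are the general-purpose registers; r[0] is unused
    static readonly int[] r = new int[11];

    static void Main()
    {
        // Each row is: opcode, arg1, arg2 (0 when unused)
        int[,] program =
        {
            { 6, 1, 10 },   // Move 10 into r1
            { 6, 3, 25 },   // Move 25 into r3
            { 11, 3, 0 },   // Print r3
            { 27, 0, 0 },   // Exit
        };

        for (int ip = 0; ip < program.GetLength(0); ip++)   // fetch
        {
            switch (program[ip, 0])                         // decode
            {
                case 6: r[program[ip, 1]] = program[ip, 2]; break;     // execute
                case 11: Console.WriteLine(r[program[ip, 1]]); break;
                case 27: return;
            }
        }
    }
}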

I wrote my homework assignment in 2002 in the idiomatic C# of the time on .NET 1.1. That means no Generics<T> - I had to make my own strongly typed collections. C# has since gained dozens (if not a hundred) language and syntax improvements. I didn't use a Unit Testing Framework, as TDD was just starting around 1999 during the XP (eXtreme Programming) days and NUnit was just getting started. It also uses "unsafe" to pin down memory in a few places. I'm sure there are WAY WAY WAY better and more sophisticated ways to do this today in idiomatic C# of 2017. Those are excuses; the real reasons are my own ignorance and limited ability, combined with some night-school laziness.

One of the more fun parts of this exercise was moving from physical memory (a byte array as I recall) to a full-on Memory Manager where each Process thought it could address a whole bunch of Virtual Memory while actual Physical Memory was arbitrarily sized. Then - as a joke - I would swap out memory pages as XML! ;) Yes, to be clear, it was a joke and I still love it.

You can run an "app" by passing in the total physical memory along with the text file containing the program, but you can also run an arbitrary number of programs by passing in an arbitrary number  of text files! The "TinyOS" will handle each process thinking it has its own memory and will time

If you are more of a visual learner, perhaps you'd prefer this 20-slide PowerPoint on this Tiny CPU that I presented in Malaysia later that year. You dig those early 2000-era slides? I KNOW YOU DO.

Tiny OS Memory SlidesTiny OS Memory SlidesTiny OS Memory Slides 

Updating a .NET 1.1 app to cross-platform .NET Core 2.0

Step 1 was to download the original code from my own blog. ;) This is also Reason #4134 why you should have a blog.

I decided to use Visual Studio 2017 to upgrade it, and even worse I decided to use .NET Core 2.0 which is currently in Preview. I wanted to use .NET Core 2.0 not just because it's cross-platform but also because it promises to have a pretty large API surface area and I want this to "just work." The part about getting my old application running on Linux is going to be awesome, though.

Visual Studio then pops a scary dialog about upgrading files. NOTE that another totally valid way to do this (that I will end up doing later in this blog post) is to just make a new project and move the source files into it. Natch.


Visual Studio says it's targeting .NET 2.0 Full Framework, but I ratchet it up to 4.6 to see what happens. It builds, but with a bunch of warnings about obsolete methods, the most interesting one being this one:

Warning CS0618    

'ConfigurationSettings.AppSettings' is obsolete:
'This method is obsolete, it has been replaced by
System.Configuration!System.Configuration.ConfigurationManager.AppSettings'
C:\Users\scott\Downloads\TinyOSOLDOLD\OS Project\CPU.cs 72

That's telling me that my .NET 1/2 API will work but has been replaced in .NET 4.x, but I'm more interested in .NET Core 2.0. I could make my EXE a LIB and target .NET Standard 2.0 or I could make a .NET Core 2.0 app and perhaps get a few more APIs. I didn't do a formal analysis with the .NET Portability Analyzer but I will add that to the list of Things To Do. I may be able to make a library that works on an iPhone - a product that didn't exist when I started this assignment. That would be Just Cool(tm).

I decided to just make a new empty .NET Core 2.0 app and copy the source .cs files into it. A few interesting things.

  • My app also used "unsafe" code (it pins memory down and accesses it directly).
  • It has extensive inline documentation in comments that I used to use NDoc to make a CHM Help file. I'd like that doc to turn into HTML at some point.
  • It also has an appsettings.json file that needs to get copied to the output folder when it compiles.
  • While I could publish it to a self-contained .NET Core exe, for now I'm running it like this in my test batch files - example:
    • dotnet netcoreapp2.0/TinyOSCore.dll 512 scott13.txt

Here's the resulting csproj file.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
  </PropertyGroup>

  <PropertyGroup>
    <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
  </PropertyGroup>

  <ItemGroup>
    <None Remove="appsettings.json" />
  </ItemGroup>

  <ItemGroup>
    <Content Include="appsettings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </Content>
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="2.0.0-preview2-final" />
  </ItemGroup>

</Project>

Other than the obsolete configuration warning and a few malformed XML comments, the app compiled and ran! You can actually "watch" the nightmare process here https://github.com/shanselman/TinyOS/commits/Core2Port in the form of GitHub commits. I also moved the docs from a 2002 Word Doc to Markdown so be sure to explore the fairly extensive spec https://github.com/shanselman/TinyOS.

The only significant change was loading the config. Configuration is quite different on .NET Core 2.0 from Full Framework. It's FAR more, ahem, configurable. I could have used "Options," or I could have written my own config provider if it was important to keep the file format.

This little TinyOS has a bunch of config options that come in from a .exe.config file in XML like this (truncated):

<configuration>
  <appSettings>
    <!--
      Must be a factor of 4.
      This is the total Physical Memory in bytes that the CPU can address.
      This should not be confused with the amount of total or addressable memory
      that is passed in on the command line.
    -->
    <add key="PhysicalMemory" value="128" />
    <!--
      Must be a factor of 4.
      This is the amount of memory in bytes each process is allocated.
      Therefore, if this is 256 and you want to load 4 processes into the OS,
      you'll need to pass a number > 1024 as the total amount of addressable memory
      on the command line.
    -->
    <add key="ProcessMemory" value="384" />
    <add key="DumpPhysicalMemory" value="true" />
    <add key="DumpInstruction" value="true" />
    <add key="DumpRegisters" value="true" />
    <add key="DumpProgram" value="true" />
    <add key="DumpContextSwitch" value="true" />
    <add key="PauseOnExit" value="false" />

I have a few choices. I could make a Configuration Provider and teach .NET Core to read this format (there's an XML adapter, in fact) or make the code porting easier by moving these "name/value" pairs to a JSON file like this:

{
  "PhysicalMemory": "128",
  "ProcessMemory": "384",
  "DumpPhysicalMemory": "true",
  "DumpInstruction": "true",
  "DumpRegisters": "true",
  "DumpProgram": "true",
  "DumpContextSwitch": "true",
  "PauseOnExit": "false",
  "SharedMemoryRegionSize": "16",
  "NumOfSharedMemoryRegions": "4",
  "MemoryPageSize": "16",
  "StackSize": "16",
  "DataSize": "16"
}

This was just a few minutes of search and replace to change the XML to JSON. I could have also written a little app or shell script. By changing the config (rather than writing an adapter) I could then keep the code 99% the same.
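
For the record, that "little app" really is only a dozen lines. A sketch - assuming the old config is well-formed XML and every appSettings entry is a simple key/value add:

using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

class AppSettingsToJson
{
    // Usage: AppSettingsToJson old.exe.config appsettings.json
    static void Main(string[] args)
    {
        var doc = XDocument.Load(args[0]);
        var pairs = doc.Descendants("appSettings")
                       .Elements("add")
                       .Select(e => $"  \"{e.Attribute("key").Value}\": \"{e.Attribute("value").Value}\"");
        File.WriteAllText(args[1], "{\n" + string.Join(",\n", pairs) + "\n}");
    }
}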

My code was doing things like this (all over...there was no DI container yet):

bytesOfPhysicalMemory = uint.Parse(ConfigurationSettings.AppSettings["PhysicalMemory"]);

And I'd like to avoid major refactoring - yet. I added this bit of .NET Core configuration at the top of the EntryPoint and saved away the resulting IConfigurationRoot:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json");
Configuration = builder.Build();

I've now got a dictionary-style IConfiguration object called "Configuration." So now I just do this in a dozen places and the app compiles again:

bytesOfPhysicalMemory = uint.Parse(Configuration["PhysicalMemory"]);
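
If I ever do that refactoring, strongly-typed binding is the obvious next step. A sketch, assuming a reference to the Microsoft.Extensions.Configuration.Binder package and a hypothetical OsSettings class of my own:

using Microsoft.Extensions.Configuration;

// Hypothetical strongly-typed settings for the same keys
public class OsSettings
{
    public uint PhysicalMemory { get; set; }
    public uint ProcessMemory { get; set; }
    public bool DumpRegisters { get; set; }
    // ...and so on for the other keys
}

// Then, once, at startup:
// var settings = new OsSettings();
// Configuration.Bind(settings);   // extension method from the Binder package
// bytesOfPhysicalMemory = settings.PhysicalMemory;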

This brings up that feeling we all have when we look at old code - especially our own old code. I should have abstracted that away! Why didn't I use an interface? Why so many statics? What was I thinking?

We can beat ourselves up or we can feel good about ourselves and remember this. The app worked. It still works. There is value in it. I learned a lot. I'm a better programmer now. I don't know how far I'll take this old code but I had a lovely afternoon porting it to .NET Core 2.0 and I may refactor the heck out of it or I may not.

TinyOS on Ubuntu

For now I did update the smoke tests to run on both Windows and Linux and I'm happy with the experiment.

Related Links

Have YOU done a project like this, either in school or on your own?


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2017 Scott Hanselman. All rights reserved.
     

Review: The AmpliFi HD (High-Density) Home Wi-Fi Mesh Networking System


The AmpliFi Router is a cute small white box with a black circular touchscreen

I've been very happy with the TP-Link AC3200 Router I got two years ago. It's been an excellent and solid router. However, as the kids get older and the number of mobile devices (and smart(ish) devices) in the house increases, the dead wifi spots have become more and more noticeable. Additionally I've found myself wanting more control over the kids' internet access.

There's a number of great WiFi Survey Apps but I was impressed with the simplicity of this Windows 10 WiFi Survey app, so I used it to measure the signals around my house, superimposed with a picture of the floor plan.

Here's the signal strength of the TP-Link. Note that when you're using a WiFi Survey app you need to consider whether you're measuring 2.4GHz, which gives you better distance at slower speeds, or 5GHz, which can give you a much faster connection at the cost of range. As a general rule in a single room or small house, 5GHz is better and you'll absolutely notice it with video streaming like Netflix.

Below is a map of the 5GHz signal for my single TP-Link router. It's "fine" but it's not epic if you move around. You can guess from the map that the router is under the stairs in the middle.

My older router's wifi map shows mostly Yellow

You can also guess where concrete walls are, as well as the angles of certain vectors that pass through thick walls diagonally and affect the signal. Again, it's OK but it's starting to be annoying and I wanted to see if I could fix it.

SIDE BAR: It is certainly possible to take two routers and combine them into one network with a shared SSID. If you know how to do this kind of thing (and enjoy it) then more power to you. I tried it out in 2010 and it worked OK, but I want my network to "just work" 100% of the time, out of the box. I like the easy setup of a consumer device with minimal moving parts. Mesh Networking products are reaching the consumer at a solid price point with solid tech so I thought it was time to make the switch.

Below is the same map with the same locations, except using the AmpliFi HD (High-Density) Home Wi-Fi System from Ubiquiti Networks. This is the consumer (or "prosumer") version of the technology that Ubiquiti (UBNT) uses in their commercial products.

AmpliFi HD includes the router and two "mesh points." These are extenders that use 3x3 MIMO - they can transmit and receive via 3 streams at a low level. MIMO is part of the 802.11n spec.

The Signal from the AmpliFi HD is fantastic

Note that this improvement is JUST using the AmpliFi main router. When you do a WiFi Survey the "Mesh Points" will show up as the same SSID (the same wireless network) but they'll have different MAC addresses. That means in my list of networks in the Survey tool my "HanselMesh" network appears three times. Don't worry, it's one SSID and your computers will only see ONE network - it's just advanced tools that see each point. It's that "meshing" of n number of access points that is the whole point.

These two maps below are the relative strengths of just the mesh points. It's the union of all three of these maps that gives the clear picture. For example, one mesh point covers the living area fantastically (as does the router itself) while the other covers the garage (not that it needs it) and the entire office.

The mesh points make the signal better in parts of the houseThe mesh points make the signal better in parts of the house

Between the main router and the two included mesh points there are NO dead spots in the house. I'll find the kids in odd corners with an iPad, behind a couch in the play room where they couldn't get signal before. I'm finding myself sitting in different rooms than I did before just because I can roam without thinking about it.

I would suspect I could get away with buying just the AmpliFi Router (around US$133) and maybe one mesh point extender but the price for all three (router + 2 mesh points) is decent. The slick part is that you can add mesh points OR a second router. It's the second router idea that is most compelling for multi-floor buildings that also have a wired network. For example, I could add a second router (not a mesh point) upstairs and plug it into the wall (so it's "wire backed").

The mesh points plug into the wall and just sit there. You can adjust them, bend them to point towards the router, and best of all - move them at will. For example, when I set up the network initially I put the two mesh points where I thought they'd work best. But one didn't and Netflix was dropping. I literally unplugged it and moved it into the hallway and plugged it in. A minute later that whole area was full speed. This means if I did/do find a dead spot, I could just move the mesh point either temporarily or permanently.

The router is adorable. Like "I wish it wasn't in a closet" adorable. It's pretty enough that you'll want it on your desk. It has a great LCD touchscreen and a lighted base. The touchscreen shows your IP, total bandwidth this month (very useful, in fact), and bandwidth currently used.

The router is best set up with an iPhone/iPad or Android device. There is a VERY minimal web interface but you really can't manage the AmpliFi (as of the time of this writing) with a web browser - it really is designed to be administered with a mobile app. And frankly, I'm OK with it because the app is excellent.

The AmpliFi App says "Everything is Great"35Mbs up/down

The download/upload numbers there aren't the maximum speed - it's the bandwidth being used right now. You can test the speed elsewhere in the app. I have 35Mb/s up and down (usually) in my house, but Gigabit inside (which is useful as I have a Synology server internally).

There are a lot of ways to restrict internet for the kids. I like that the AmpliFi lets me group devices and apply time-limits to them. Here the Xbox and two tablets can't use the internet until 9am and they turn off at bedtime.

Notice the pause buttons as well. I can temporarily pause internet on any one device (or group of devices) whenever.

Photo Jun 25, 7 41 23 PM

When you're setting up the network and positioning the mesh points you can see near-realtime signals updates in the app.

100% signal on this Mesh Point72% signal on this Mesh Point

And once it's all done, you can impose a basic QoS (Quality of Service) on individual devices by telling the AmpliFi what they are used for. Here I've setup a device for multi-player gaming, while some iPads are used mostly for streaming.

Setting up Streaming in AmpliFiNew Updates are available

Setup is a snap. It took longer to go to each device and connect them to the new network than it did to set up the network. I suppose I could have kept the same SSID and password as the old network but I wanted a fresh start and easier A/B testing.

So far I have been 100% thrilled with the AmpliFi HD. It's important to point out again that AmpliFi is the consumer arm of Ubiquiti (UBNT) and that a dozen programmer/techie-types on Twitter suggested (insisted, really) that I needed these Enterprise/Commercial Access Points. I get it. They are more advanced, fancier, offer more stats and more control. But honestly, my house isn't that big, the data I'm pushing around isn't that complex, and I don't want a Commercial Level of control. I was (and am) thoroughly impressed with the consumer stuff. The app is excellent and improving. The coverage is complete and fast. The AmpliFi is rated at 450 Mbps for 2.4 GHz and 1.3 Gbps for 5 GHz. Even if I upgrade my internet to my locality's max of 150 Mbps (I only pay for 35 Mbps today) I'm not anywhere near that limit externally, and I'm not doing anything close internally.

That said, here's some things I'd like in future updates:

  • Simpler port-forwarding with common rules. "This xbox/that service"
  • An open source VPN server. I'd like to VPN directly into the Ubiquiti, rather than into my Synology.
  • More quality of service/prioritization details. "The office server always has preferred packets, period"
  • Mobile alerts - I'd like to know if I go over x bandwidth, or if we are streaming at x Mbs for y hours.
  • A fully featured administration web console.

And yes, I realize NOW I should have called the Network "Hanselmesh." Missed opportunity.

I highly recommend the AmpliFi HD. I frankly have no complaints other than my small wish list above. Buy one via my Amazon referral links so I can keep blogging in my spare time AND buy tacos. Your use of these links gives me walking around money. Thanks for reading!


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2017 Scott Hanselman. All rights reserved.
     

URLs are UI


What a great title. "URLs are UI." Pithy, clear, crisp. Very true. I've been saying it for years. Someone on Twitter said "this is the professional quote of 2017" because they agreed with it.

Except Jakob Nielsen said it in 1999. And Tim Berners-Lee said "Cool URIs don't change" in 1998.

So many folks spend time on their CSS and their UX/UI but still come up with URLs that are at best, comically long, and at worst, user hostile.

Search Results that aren't GETs - Make it easy to share

Even your non-technical parent or partner thinks URLs are UI. How do I know? How many times has a relative emailed you something like this:

"Check out this house we found!
https://www.somerealestatesite.com/
homes/for_sale/
search_results.asp"

That's not meant to tease the non-technical relative! It's not their fault! The URL is the UI for them. It's totally reasonable for them to copy-paste from the box that represents where they are and give it to you so you can go there too!

Make it a priority that your website supports shareable URLs.

URLs that are easy to shorten - Can you easily shorten a URL?

I love Stack Overflow's URLs. Here's an example: https://stackoverflow.com/users/6380/scott-hanselman 

The only thing that matters there is the 6380. Try it https://stackoverflow.com/users/6380 or https://stackoverflow.com/users/6380/fancy-pants also works. SO will even support this! http://stackoverflow.com/u/6380.

Genius. Why? Because they decided it matters.

Here's another https://stackoverflow.com/questions/701030/whats-the-significance-of-oct-12-1999 again, the text after the ID doesn't matter. https://stackoverflow.com/questions/701030/

This is a great model for URLs where you want to use a unique ID but the text/title in the URL may change. I use this for my podcasts so https://hanselminutes.com/587/brandon-bouier-on-the-defense-digital-service-and-deploying-code-in-a-war-zone is the same as https://hanselminutes.com/587.
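
In ASP.NET Core terms, supporting that pattern is a single route where the slug is optional and ignored for lookup. A hypothetical sketch (not my actual podcast site's code):

using Microsoft.AspNetCore.Mvc;

public class ShowsController : Controller
{
    // Matches /587 and /587/brandon-bouier-on-... alike;
    // only the numeric id is used to find the episode.
    [Route("/{id:int}/{slug?}")]
    public IActionResult Episode(int id, string slug = null)
    {
        return View(model: id);   // hypothetical lookup - the slug is purely cosmetic
    }
}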

Unnecessarily long or unintuitive URLs - Human Readable and Human Guessable

Sometimes if you want context to be carried in the URL you have to, well, carry it along. There was a little debate on Twitter recently about URLs like this https://fabrikam.visualstudio.com/_projects. What's wrong with it? The _ is not intuitive at all. Why not https://fabrikam.visualstudio.com/projects? Because obscure technical reason. In fact, all the top level menu items for doing stuff in VSTS start with _. Not /menu/ or /action or whatever. My code is https://fabrikam.visualstudio.com/_git/FabrikamVSO and I clone from here https://fabrikam.visualstudio.com/DefaultCollection/_git/FabrikamVSO. That's weird. Where did DefaultCollection come from? Why can't I just add a ".git" extension to my project's URL and clone that? Well, maybe they want the paths to be nice in the URL.

Nope. https://fabrikam.visualstudio.com/_git/FabrikamVSO?path=%2Fsrc%2Fsetup%2Fcleanup.local.ps1&version=GBmaster&_a=contents is a file. Compare that to https://github.com/shanselman/TinyOS/blob/master/readme.md at GitHub. Again, I am sure there is a good, and perhaps very valid, technical reason. But another, franker, reason is that URLs weren't a UX priority.

Same with OneDrive https://onedrive.live.com/?id=CD0633A7367371152C%21172&cid=CD06A73371152C vs. DropBox https://www.dropbox.com/home/Games

As a programmer, I am sympathetic. As a user, I have zero sympathy. Now I have to remember that there is a _ and it's a thing.

I'll propose this: URLs are rarely a tech problem. They are an organizational willpower problem. You care a lot about the evocative 2meg jpg hero image on your website. You change fonts, move CSS around ad infinitum, and agonize over single pixels. You should also care about your URLs.

SIDE NOTE: Yes, I am fully aware of my own hypocrisy with this issue. My blog software was written by a bunch of us in 2002 and our URLs are close to OK, but their age is showing. I need to find a balance between "Cool URLs don't change" and "should I change totally uncool URLs." Ideally I'd change my blog's URLs to be all lowercase, use hyphens for spaces instead of CamelCase, and I'd hide the technology. No need (other than 17 year old historical technical ones) to have .aspx or .php at the end of your URL. It's on my list.

What is your advice, Dear Reader for good URLs?


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2017 Scott Hanselman. All rights reserved.
     

Ubuntu now in the Windows Store: Updates to Linux on Windows 10 and Important Tips


I noticed this blog post about Ubuntu over at the Microsoft Command Line blog. Ubuntu is now available from the Windows Store for builds of Windows over 16215.


You can run "Winver" to see your build number of Windows. If you run Windows 10 you can certainly sign up for the Windows Insiders builds, or you can wait a few months until these features make their way to the mainstream. I've been running Windows 10 Insiders "Fast ring" for a while with a few issues but nothing blocking.

The addition of Ubuntu to the Windows Store may initially seem confusing or even a little bizarre. However, given a minute to understand the larger architecture, it makes a lot of sense. That said, for those of us who have been beta-testing these features, the move to the Windows Store will require some manual steps in order for you to reap the benefits.

Here's how I see it.

  • For the early betas of the Windows Subsystem for Linux you type bash from anywhere and it runs Ubuntu on Windows.
  • Ubuntu on Windows hides its filesystem in C:\Users\scott\AppData\Local\somethingetcetc and you shouldn't go there or touch it.
  • By moving the tar files and Linux distro installation into the Store, we users get to use the Store's CDN (Content Distribution Network) to get distros quickly and easily. 
    • Just turn on the feature and REBOOT
      Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

then hit the store to get the binaries!

Ok, now this is where and why it gets interesting.

Soon (later this month I'm told) we will be able to have n number of native Linux distros on our Windows 10 machines at one time. You can install as many as you like from the store. No VMs, just fast Linux...on Windows!

Windows 10 includes a utility for the Windows Subsystem for Linux called "wslconfig."

C:\>wslconfig

Performs administrative operations on Windows Subsystem for Linux

Usage:
/l, /list [/all] - Lists registered distributions.
/all - Optionally list all distributions, including distributions that
are currently being installed or uninstalled.
/s, /setdefault <DistributionName> - Sets the specified distribution as the default.
/u, /unregister <DistributionName> - Unregisters a distribution.

C:\WINDOWS\system32>wslconfig /l
Windows Subsystem for Linux Distributions:
Ubuntu (Default)
Fedora
OpenSUSE

At this point when I type "bash" at the regular Windows command prompt or PowerShell I will be launching my default Linux. I can also just type "Ubuntu" or "Fedora," etc to get a specific one.

If I wanted to test my Linux code (.NET, node, go, ruby, whatever) I could script it from Windows and run my tests on n number of distros. Slick for developers.
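
A sketch of what that scripting could look like from the Windows side - hedged, because I'm assuming each Store distro installs a launcher you can call by name and that the launcher supports a "run <command>" syntax; check your build before relying on either:

using System;
using System.Diagnostics;

class CrossDistroTest
{
    static void Main()
    {
        // Assumed launcher names - adjust to whatever the Store actually installed
        foreach (var distro in new[] { "ubuntu", "fedora", "opensuse" })
        {
            Console.WriteLine($"--- {distro} ---");
            var p = Process.Start(new ProcessStartInfo
            {
                FileName = distro,
                Arguments = "run dotnet --info",   // assumption: launchers take 'run <cmd>'
                UseShellExecute = false
            });
            p.WaitForExit();
        }
    }
}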

TODOs if you have WSL and Bash from earlier betas

If you already have "bash" on your Windows 10 machine and want to move to the "many distros" world, you'll just install the Ubuntu distro from the Store and then move your distro customizations out of the "legacy/beta bash" over to the "new, still-beta-but-getting-closer-to-release WSL." I copied my ~/ folder over to /mnt/c/Users/Scott/Desktop/WSLBackup, then opened Ubuntu and copied my .rc files and whatnot back in. Then I removed my original bash with lxrun /uninstall. Once I've done that, my distros are managed by the Store and I can have as many as I like. Other than customizations, it's really easy (like, it's not a big deal and it's fast) to add or remove Linuxes on Windows 10, so fear not. Back up your stuff and this will be a 10 min operation, plus whatever apt-get installs you need to redo. Everything else is the same and you'll still want to continue storing and sharing files via /mnt/c.

NOTE: I did a YouTube video called Editing code and files on Windows Subsystem for Linux on Windows 10 that I'd love if you checked out and shared on social media!

Enjoy!


Sponsor: Seq is simple centralized logging, on your infrastructure, with great support for ASP.NET Core and Serilog. Version 4 adds integrated dashboards and alerts - check it out!



© 2017 Scott Hanselman. All rights reserved.
     

13 hours debugging a segmentation fault in .NET Core on Raspberry Pi and the solution was...


Debugging is a satisfying and special kind of hell. You really have to live it to understand it. When you're deep into it you never know when it'll be done. When you do finally escape it's almost always a DOH! moment.

I spent an entire day debugging an issue and the solution ended up being a checkbox.

NOTE: If you get a third of the way through this blog post and already figured it out, well, poop on you. Where were you after lunch WHEN I NEEDED YOU?

I wanted to use a Raspberry Pi in a tech talk I'm doing tomorrow at a conference. I was going to show .NET Core 2.0 and ASP.NET running on a Raspberry Pi so I figured I'd start with Hello World. How hard could it be?

You'll write and build a .NET app on Windows or Mac, then publish it to the Raspberry Pi. I'm using a preview build of the .NET Core 2.0 command line and SDK (CLI) I got from here.

C:\raspberrypi> dotnet new console

C:\raspberrypi> dotnet run
Hello World!
C:\raspberrypi> dotnet publish -r linux-arm
Microsoft Build Engine version for .NET Core

raspberrypi1 -> C:\raspberrypi\bin\Debug\netcoreapp2.0\linux-arm\raspberrypi.dll
raspberrypi1 -> C:\raspberrypi\bin\Debug\netcoreapp2.0\linux-arm\publish\

Notice the simplified publish. You'll get a folder for linux-arm in this example, but could also publish osx-x64, etc. You'll want to take the files from the publish folder (not the folder above it) and move them to the Raspberry Pi. This is a self-contained application that targets ARM on Linux so after the prerequisites that's all you need.

I grabbed a mini-SD card, headed over to https://www.raspberrypi.org/downloads/ and downloaded the latest Raspbian image. I used etcher.io - a lovely image burner for Windows, Mac, or Linux - and wrote the image to the SD Card. I booted up and got ready to install some prereqs. I'm only 15 min in at this point. Setting up a Raspberry Pi 2 or Raspberry Pi 3 is VERY smooth these days.

Here's the prereqs for .NET Core 2 on Ubuntu or Debian/Raspbian. Install them from the terminal, natch.

sudo apt-get install libc6 libcurl3 libgcc1 libgssapi-krb5-2 libicu-dev liblttng-ust0 libssl-dev libstdc++6 libunwind8 libuuid1 zlib1g

I also added an FTP server and ran vncserver, so I'd have a few ways to talk to the Raspberry Pi. Yes, I could also SSH in but I have a spare monitor, and with that monitor plus VNC I didn't see a need.

sudo apt-get install pure-ftpd

vncserver

Then I fire up Filezilla - my preferred FTP client - and FTP the publish output folder from my dotnet publish above. I put the files in a folder off my ~\Desktop.

FTPing files

Then from a terminal I

pi@raspberrypi:~/Desktop/helloworld $ chmod +x raspberrypi

(or whatever the name of your published "exe" is - it'll be the name of your source folder/project with no extension. As this is a self-contained published app, again, all the .NET Core runtime stuff is in the same folder with the app.)

pi@raspberrypi:~/Desktop/helloworld $ ./raspberrypi 

Segmentation fault

The crash was instant...not a pause and a crash, but it showed up as soon as I pressed enter. Shoot.

I ran "strace ./raspberrypi" and got this output. I figured maybe I missed one of the prerequisite libraries, and I just needed to see which one and apt-get it. I can see the ld.so.nohwcap error, but that's a historical Debian-ism and more of a warning than a fatal.

strace on a bad exe in Linux

I used to be able to read straces 20 years ago but much like my Spanish, my skills are only good at Chipotle. I can see it just getting started loading libraries, seeking around in them, checking file status, mapping files to memory, setting memory protection, then it all falls apart. Perhaps we tried to do something inappropriate with some memory that just got protected? We are dereferencing a null pointer.

Maybe you can read this and you already know what is going to happen! I did not.

I run it under gdb:

pi@raspberrypi:~/Desktop/WTFISTHISCRAP $ gdb ./raspberrypi 

GNU gdb (Raspbian 7.7.1+dfsg-5+rpi1) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
This GDB was configured as "arm-linux-gnueabihf".
"/home/pi/Desktop/helloworldWRONG/./raspberrypi1": not in executable format: File truncated
(gdb)

Ok, sick files?

I called Peter Marcu from the .NET team and we chatted about how he got it working and compared notes.

I was using a Raspberry Pi 2, he a Pi 3. Ok, I'll try a 3. 30 minutes later, new SD card, new burn, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

Weird.

Maybe corruption? Here's a thread about Corrupted Files on Raspbian Jesse 2017-07-05! That's the version I have. OK, I'll try the build of Raspbian from a week before.

30 minutes later, burn another SD card, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

BUT IT WORKS ON PETER'S MACHINE.

Weird.

Maybe a bad nuget.config? No.

Bad daily .NET build? No.

BUT IT WORKS ON PETER'S MACHINE.

Ok, I'll try Ubuntu Mate for Raspberry Pi. TOTALLY different OS.

30 minutes later, burn another SD card, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

What's the common thread here? Ok, I'll try from another Windows machine.

SAME RESULT - segfault.

I call Peter back and we figure it's gotta be prereqs...but the strace doesn't show we're even trying to load any interesting libraries. We fail FAST.

Ok, let's get serious.

We both have Raspberry Pi 3s. Check.

What kind of SD card does he have? Sandisk? Ok, I'll use Sandisk. But disk corruption makes no sense at that level...because the OS booted!

What did he burn with? He used Win32diskimager and I used Etcher. Fine, I'll bite.

30 minutes later, burn another SD card, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

He sends me HIS build of a HelloWorld and I FTP it over to the Pi. SAME RESULT - segfault.

Peter is freaking out. I'm deeply unhappy and considering quitting my job. My kids are going to sleep because it's late.

I ask him what he's FTPing with, and he says WinSCP. I use FileZilla, ok, I'll try WinSCP.

WinSCP's New Session dialog starts here:

SFTP is Default

I say, WAIT. Are you using SFTP or FTP? Peter says he's using SFTP so I turn on SSH on the Raspberry Pi and SFTP into it with WinSCP and copy over my Hello World.

IT FREAKING WORKS. IMMEDIATELY.

Hello World on a Raspberry Pi

BUT WHY.

I make a folder called Good and a folder called BAD. I copy with FileZilla to BAD and with WinSCP to GOOD. Then I run a compare. Maybe some part of .NET Core got corrupted? Maybe a supporting native library?

pi@raspberrypi:~/Desktop $ diff --brief -r helloworld/ helloworldWRONG/

Files helloworld/raspberrypi1 and helloworldWRONG/raspberrypi1 differ

Wait, WHAT? The executables are different? One is 67,684 bytes and the bad one is 69,632 bytes.

Time for a visual compare.

All the ODs are gone

At this point I saw it IMMEDIATELY.

0D is CR (13) and 0A is LF (10). I know this because I'm old and I've written printer drivers for printers that had both carriages and lines to feed. Why do YOU know this? Likely because you've transferred files between Unix and Windows once or thrice, perhaps with FTP or Git.

All the CRs are gone. From my binary file.

Why?

I went straight to settings in FileZilla:

Treat files without extensions as ASCII files

See it?

Treat files without extensions as ASCII files

That's the default in FileZilla: treat files that are just chilling, minding their own business, as ASCII, and then just randomly strip out their carriage returns. What could go wrong? And it doesn't even look for CR LF pairs! No, it just looks for CRs and strips them. Classy.

In retrospect I should have known this, but it wasn't even the switch to SFTP that fixed things - it was the switch to a transfer program with different defaults.
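
Next time a ten-liner will spot this in seconds - a quick sketch that counts carriage-return bytes in each copy of a file, since a binary's CRs should never just vanish:

using System;
using System.IO;
using System.Linq;

class CountCRs
{
    // Usage: CountCRs good/raspberrypi bad/raspberrypi
    static void Main(string[] args)
    {
        foreach (var path in args)
        {
            var bytes = File.ReadAllBytes(path);
            // 0x0D is CR
            Console.WriteLine($"{path}: {bytes.Length} bytes, {bytes.Count(b => b == 0x0D)} CRs");
        }
    }
}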

This bug/issue whatever burned my whole Monday. But, it'll never burn another Monday, Dear Reader, because I've seen it before now.

FAIL FAST FAIL OFTEN my friends!

Why does experience matter? It means I've failed a lot in the past and it's super useful if I remember those bugs because then next time this happens it'll only burn a few minutes rather than a day.

Go forth and fail a lot, my loves.

Oh, and FTP sucks.


Sponsor: Thanks to Redgate! A third of teams don’t version control their database. Connect your database to your version control system with SQL Source Control and find out who made changes, what they did, and why. Learn more



© 2017 Scott Hanselman. All rights reserved.
     

Monospaced Programming Fonts with Ligatures


Animation of how ligature fonts change as you type

Typographic ligatures are when multiple characters appear to combine into a single character. Simplistically, when you type two or more characters and they magically attach to each other, you're using ligatures that were supported by your OS, your app, and your font.

I did a blog post in 2011 on using OpenType Ligatures and Stylistic Sets to make nice looking wedding invitations. Most English laypeople aren't familiar with ligatures as such and are impressed by them! However, if your language uses ligatures as a fundamental building block, this kind of stuff is old hat. Ligatures are fundamental to Arabic script and when you're typing it up you'll see your characters/font change and ligatures be added as you type. For example here is ل ا with a space between them, but this is لا the same two characters with no space. Ligatures kicked in.

OK, let's talk programming. Picking a programming font is like picking a religion. No matter what you pick someone will say you're wrong. Most people will agree at least that monospaced fonts are ideal for reading code and that both of you who use proportionally spaced fonts are destined for hell, or at the very least, purgatory.

Beyond that, there's some really interesting programming fonts that have ligature support built in. It's important that you - as programmers - understand and remember that ligatures are just a view on the bytes that are your code. If you custom-make a font that turns the = equals sign into a poop emoji, that's between you and your font. The same thing applies to ligatures. Your code is the same.

Three of the most interesting and thoughtful monospaced programming fonts with ligatures are Fira Code, Monoid, and Hasklig. I say "thoughtful" and that's exactly what I mean - these folks have designed these fonts with programming in mind, considering spacing, feel, density, pleasantness, glance-ability, and a dozen other things that I'm not clever enough to think of.

I'll be doing screenshots (and coding) in the free cross-platform Visual Studio Code. Go to your User Settings (Ctrl-,) or File | Preferences, and add your font name and turn on ligatures if you want to follow along. Example:

// Place your settings in this file to overwrite the default settings
{
    "editor.fontSize": 20,
    "editor.fontLigatures": true,
    "editor.fontFamily": "Fira Code"
}

Most of these fonts have dozens and dozens of ligature combinations and there is no agreement for "make this a single glyph" or "use ligatures for -> but not ==>" so you'll need to try them out with YOUR code and make a decision for yourself. My sample code example can't be complete and how it looks and feels to you on your screen is all that matters.

Here's my little sample. Note the differences.

// FIRA CODE

object o;
if (o is int i || (o is string s &&
    int.TryParse(s, out i))) { /* use i */ }
var x = 0xABCDEF;
-> --> ==> != === !== && || <= <
</><tag> http://www.hanselman.com
<=><!-- HTML Comment -->
i++; #### ***

Fira Code

Fira Code

There's so much here. Look at how "www" turned into an interesting glyph. Things like != and ==> turn into arrows. HTML Comments are awesome. Double ampersands join together.

I was especially impressed by the redefined hex "x". See how it's higher up and smaller than var x?

Monoid

Monoid

Monoid prides itself on being crisp and readable on retina displays as well as at 9pt on low-res displays. I frankly can't understand how tiny font people can function. It gives me a headache to even consider programming at anything less than 14 to 16pt and I am usually around 20pt. And my vision is fine. ;)


Monoid's goal is to be sleek and precise and the designer has gone out of their way to make sure there's no confusion between any two characters.

Hasklig

Hasklig takes the Source Code Pro font and adds ligatures. As you can tell by the name, it's great in Haskell; for a while a number of Haskell people had taken to using single character (tiny) Unicode glyphs like ⇒ for things like =>. Clearly this was a problem best solved by ligatures.

Hasklig

Do any of you use programming fonts with ligatures? I'm impressed with Fira Code, myself, and I'm giving it a try this month.


Sponsor: Thanks to Redgate! A third of teams don’t version control their database. Connect your database to your version control system with SQL Source Control and find out who made changes, what they did, and why. Learn more


© 2017 Scott Hanselman. All rights reserved.
     