
Scaling Mentorship


You may have had a mentor in the past. Often these are more senior/elder people who are further along in their careers. The presumption is usually that if they are "ahead" of you, they likely have something profound to offer you in the way of advice or strategy.

This is a classic mentor/mentee situation and while I think it has value, it has a few problems that are worth pointing out. Does it scale? Is a senior person the right mentor for you? Is just one mentor the right number? Does that person have the time to mentor you?

I've been blessed to have several mentors over the years and I've been fortunate to be a mentor myself. But there's only so much time in the day. Even if I could truly mentor 4 people a week, and meet with them a few times a month, that could fill up many days. Plus, I have to ask myself - am I giving them what they need? Personal advice? Career advice? Technical advice? Getting promoted advice? Life advice?

Create a Board of Directors for Your Life

I've been experimenting with a few other models for mentorship. Five years ago I set up a Board of Directors for my life. You can learn more at http://lifesboardofdirectors.com.

Companies have mission statements and a Board of Directors. Your life is pretty important. Why not create a Life Board of Directors to help you through it? Pick 2 to 5 of your friends. Not necessarily your closest friends, but friends that are close enough that you can really confide but not so close that they can't see the big picture. Email them once a month, once a quarter, or "once a crisis." Ask them for advice, lean on them, trust them, and help them as well.

Assemble "Team You" and use your team to brainstorm directions and implementations of big decisions like moving to New York, or changing your business's direction, starting a new venture, or getting fit.

Use your personal Board of Directors as one of the compasses in your life. You've got family, friends, perhaps faith, hobbies, values, etc. Add your Team to this list of personal compasses.

It might sound like a silly mind game, but that's common with many hacks. Hacks feel insignificant but can have huge effects. The trick is to remember that it is a hack - you're hacking yourself. The idea of life's board of directors is a relationship hack meant to remind you in difficult times that you can agree on something fundamental and you have a team to support you in your endeavors. Set a direction and head in that direction with the confidence you've got a supportive group behind you.

Go assemble your Life's Board today.

Host Mentorship Meals

Over the last several months I've been quietly hosting "Dinner for people on the come up." These are dinners where everything is FrieNDA and we talk frankly about our jobs, our levels, our work situations, and most importantly - we find new mentors and people with whom to brainstorm. It's a mentorship multiplier. We encourage folks to pull from the pool of potential peer mentors.

Tonight we had one with almost 20 people. These were mostly young people, many women and people of color, all trying to find their way in tech. I have some life experiences to offer this group, but most of all what I can lend is my privilege. I can use my standing within the company and the industry to invite folks together and let them take over and mentor each other.

I host the meal, kick it off, sometimes invite guests to speak, and the attendees often break off into small groups, meet up separately, and network. Peer mentorship is just as important as "elder/senior" mentorship.

It also helps mentor people in the fullness of their personalities. Where I might help with speaking at conferences or technical issues, someone else can better speak to issues of harassment, or how to get a promotion, or how to be better seen and heard in meetings. I can also learn from younger people - and I do - every day.

The goal of mentorship isn't to lecture and preach, it's to guide and counsel, inspire and motivate. Most of all, to listen. Once you've truly heard your mentee, then you can help them think strategically and better plan their career, no matter what their challenges and strengths.

What do you recommend as positive ways to Scale Mentorship?

* Stock photo from The Jopwell Collection


Sponsor: Get the latest JetBrains Rider preview for .NET Core 2.0 support, Value Tracking and Call Tracking, MSTest runner, new code inspections and refactorings, and the Parallel Stacks view in debugger.



First Impressions - Jibo Social Robot for the Home


Jibo moves VERY organically

As you likely know, I have a BUNCH of robots in the house. Whether it be turning a tin can into a robot, driving a Raspberry Pi around with Windows IoT, building robot arms with my kids, or controlling a robot with Xamarin code, I'm ALL IN when it comes to home robots. I also have Alexa, Cortana, Siri...but they have no bodies. They are just disembodied voices - why not a social robot with a body AND a personality?

Jibo is the first social robot for the home, and when their team emailed me to try Jibo out - and soon explore their SDK and build more skills into Jibo - I jumped at the idea. Jibo started as an Indiegogo campaign in 2014 and now I've got a pre-public version that I'm stoked to explore and expand.

Jibo showed up in a surprisingly hefty box. He's about 8 pounds and about a foot tall. You turn him on and he starts his initial set up process. Since Jibo has a voice and touch screen, it's pretty straightforward to hook up to WiFi and download whatever updates are needed. After this initial process, updates happen overnight and I haven't noticed them, other than to see that Jibo has new skills in the morning. He's basically maintenance-free.

The first time you set up Jibo and he moves, I expect you'll be a little shocked - I was. His movements are extremely fluid and organic. I struggled to find the right words to explain how his movements feel, so I made an animated GIF you can see at the right. His body turns, his head moves, he has a little waist and neck. All these joints combined with the color touch screen and his voice give him quite the personality. It's clear within just a few minutes that to dismiss Jibo as an "Alexa with a body" would be a mistake.

The 9 year old and 11 year old have already started going to Jibo in the morning and asking him how his day was, and seeing if he has new skills. I believe the "bonding" - for lack of a better word - is connected to the physicality and personality of Jibo.

I realize this photo looks somewhat staged, but it's not. I snuck up on my 9 year old telling Jibo about his day at school and asking him homework questions. Jibo didn't know a number of things, but it was interesting to see how kids are extremely patient with robots, speaking to them as if they're even smaller kids.

The 9 year old says this:

If you are trying to get something to keep track of your meetings or the news you maybe would buy Alexa. But if you have a kid who loves robots you want Jibo. Jibo is fun, if you make noise Jibo will look at you. He can move his big head to look at you and if you tap his eye he will give you a list of things to do. Another new thing is that he now has a list of cool thing you can ask or tell, like one is "Hey Jibo, Are there any monsters in my house" then he will bring up a radar and look around and Jibo will say no, there's no monsters. We also have an Alexa but if your looking for some thing fun we go straight to Jibo he can tell jokes and also favorite part is when Jibo dances.

Since he wrote this, Jibo woke up with the ability to tell me the news, so I can only imagine he'll continue to get Alexa-like skills that will balance the "boring work stuff" my son says I want with the "games and homework help" that he wants.

He recognizes your face, your family's faces (if you train him and opt-in), uses your names, follows your face, and can tell where you are in the room when you talk to him. He's got 6 microphones that let him understand where you and he are in physical space.

I'm imagining the kinds of skills Jibo might potentially get in the future - or that I might write for him - like (and I'm totally brainstorming here):

  • Tell stories before bedtime
  • Watch cartoons
  • Give Khan Academy exercises as Homework
  • Play music
  • Trivia and/or board games
  • Wikipedia stuff
  • Maps
  • Tell me about my blood sugar, show a diabetes chart, wake me up if I go low.
  • Play Tea Time or play along as kids make up stories
  • Vlogging or daily diary keeping

What are your thoughts, Dear Reader? What would you want Jibo to know or do for you?

Disclaimer: The folks at Jibo sent me a pre-public Jibo for free to explore his SDK. However, my words and opinions are my own. I'll post my honest impressions here and there, on my blog and on Twitter as Jibo grows and learns more things.


Sponsor: GdPicture.NET is an all-in-one SDK for WinForms, WPF, and Web development. It supports 100+ formats, including PDF and Office Open XML. Create powerful document imaging, image processing, and document management apps!




Use a second laptop as an extended monitor with Windows 10 wireless displays


James Clarke from the Windows team rolled into a meeting today with two Surfaces...but one had no keyboard. Then, without any ceremony, he proceeded to do this:

Holy Crap a Surface as a Second Monitor

Now, I consider myself a bit of a Windows Productivity Tips Gourmand, and while I was aware of Miracast and the general idea of a Wireless Display, I didn't realize that it worked this well and that it was built into Windows 10.

In fact, I'm literally sitting here in a hotel with a separate USB3 LCD display panel to use as a second monitor. I've also used Duet Display and used my iPad Pro as a second monitor.

I usually travel with a main laptop and a backup laptop anyway. Why do I lug this extra LCD around? Madness. I had this functionality all the time, built in.

Use your second laptop as a second monitor

On the machine you want to use as a second monitor, head over to Settings | System | Projecting to this PC and set it up as you like, considering convenience vs. security.

Settings | Projecting to this PC

Then, from your main machine - the one you are projecting from - just hit Windows Key+P, like you were projecting to a projector or second display. At the bottom, hit Connect to a Wireless Display.

Connect to a Wireless Display

Then wait a bit as it scans around for your PC. You can extend or duplicate...just like another monitor...

Connected to a Wireless Display

...because Windows thinks it IS another monitor.

You can also do this with Miracast TVs like my LG, or your Roku or sometimes Amazon Fires, or you can get a Microsoft Wireless Display Adapter and HDMI to any monitor - even ones at hotels!

NOTE: It's not super fast. It's sometimes pixelly and sometimes slow, depending on what's going on around you. But I just moved Chrome over onto my other machine and watched a YouTube video, just fine. I wouldn't play a game on it, but browsing, dev, typing, coding, works just fine!

Get ready for this. You can ALSO use the second machine as a second collaboration point! That means that someone else could PAIR with you and also type and move their mouse. THIS makes pair programming VERY interesting.

 Allow input from the remote display

Here's a video of it in action:

Give it a try and let me know how it goes. I used two Surfaces, but I've also extended my display to a 3-year-old Lenovo without issues.


Sponsor: GdPicture.NET is an all-in-one SDK for WinForms, WPF, and Web development. It supports 100+ formats, including PDF and Office Open XML. Create powerful document imaging, image processing, and document management apps!




Recovering from the Windows 10 Insiders Fast 17017 volsnap.sys reboot GSOD/BSOD


NOTE: I'm not involved with the Windows Team or the Windows Insider Program. This blog is my own and written as a user of Windows. I have no inside information. I will happily correct this blog post if it's incorrect. Remember, don't just do stuff to your computer because you read it on a random blog. Think first, backup always, then do stuff.

Beta testing is always risky. The Windows Insiders Program lets you run regular early builds of Windows 10. There are multiple "rings" like Slow and Fast, depending on your risk tolerance and bandwidth. I run Fast, and maybe twice a year something bad-ish happens, like a bad video driver or an app that doesn't work, but it's usually fixed within a week. It's the price I pay for happily testing new stuff. There's also the Slow ring, which is more stable and updates about once a month vs. once a week. That ring is more "baked."

This last week, as I understand it, a nasty bug made it out to Fast for some number of people (not everyone, but enough that it sucked), myself included.

I don't reboot my Surface Book much, maybe twice a month, but I did yesterday while preparing for the DevIntersection conference and suddenly my main machine was stuck in a "Repairing Windows" reboot loop. It wouldn't start, wouldn't repair. I was FREAKING out. Other people I've seen report a Green Screen of Death (GSOD/BSOD) loop with an error in volsnap.sys.

TO FIX IT

The goal is to get rid of the bad volsnap from Windows 10 Insiders build version 17017 and replace that one file with a non-broken version from a previous build. That's your goal. There are a few ways to do this, so you need to put some thought into how you want to do it.

NOTE: At the time of this writing, Fast Build 17025 is rolling out and fixes this, so if you can take that build you're cool, and no worries. Do it.

volsnap.sys was a problem with 17017

1. Can you boot Windows 10 off something else? USB/DVD?

Can you boot off something else, like a Windows 10 USB key or a DVD? Boot off your recovery media as if you're re-installing Windows 10 BUT DO NOT CLICK INSTALL.

When you've run Windows 10 Setup, instead click Repair, then Troubleshoot, then Command Prompt. It's especially important to get to the Command Prompt this way rather than pressing Shift+F10 as you enter setup, because this path will allow you to unlock your possibly BitLockered C: drive.

NOTE: If your boot drive is bitlockered you'll need to go to https://onedrive.live.com/RecoveryKey on another machine or your phone and find your computer's Recovery Key. You'll enter this as you press Troubleshoot and it will allow you to access your now-unencrypted drive from the command prompt.
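If the recovery environment doesn't prompt you for the key, you can also unlock the drive by hand from that command prompt with manage-bde. A minimal sketch, assuming C: is the BitLockered drive and using a placeholder recovery password:

REM the key below is a placeholder - use your real recovery key from the OneDrive page above
manage-bde -unlock C: -RecoveryPassword 111111-222222-333333-444444-555555-666666-777777-888888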

At this point all your drive letters may be weird. Take a moment and look around. Your USB key may be X: or Z:. Your C: drive may be D: or E:.

2. Do you have an earlier version of volsnap.sys? Find it.

If you've been taking Windows Insiders Builds/Flights, you may have a C:\Windows.old folder. Remembering to be conscious of your drive letters, you want to rename the bad volsnap and copy in the old one from elsewhere. In this example, I get it from C:\Windows.old.

ren C:\windows\system32\drivers\volsnap.sys volsnap.sys.bak

copy C:\windows.old\windows\system32\drivers\volsnap.sys C:\windows\system32\drivers\volsnap.sys

Unfortunately, *I* didn't have a C:\windows.old folder as I had used Disk Cleanup to get more space. I found a good volsnap.sys from another machine in my house and copied it to the root of the USB key I booted from. In that case my copy command was different, as I copied from my USB key to C:\windows\system32\drivers, but the GOAL was the same - get a good volsnap.sys.
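For the record, my copy looked something like this - a sketch, assuming the USB key came up as X: and the Windows drive as C: (check yours first, as noted above):

REM X: and C: are assumptions - verify your actual drive letters
copy X:\volsnap.sys C:\windows\system32\drivers\volsnap.sys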

Once I resolved my boot issue, I went to Windows Update and am now updating to 17025.

PLEASE, friends - BACK UP YOUR STUFF. Remember the Backup Rule of Three.

Here's the rule of three. It's a long time computer-person rule of thumb that you can apply to your life now. It's also called the Backup 3-2-1 rule.

  • 3 copies of anything you care about - Two isn't enough if it's important.
  • 2 different formats - Example: Dropbox+DVDs or Hard Drive+Memory Stick or CD+Crash Plan, or more
  • 1 off-site backup - If the house burns down, how will you get your memories back?

Beta testing will cost you some time, and system crashes happen. But are they a nightmare data-loss scenario or are they an irritant? For me this was a scary "can't boot" scenario, but I had another machine and my stuff was backed up.

Don't take beta builds of anything on your primary machine that you care about and that makes you money.

DISCLAIMER: I love you but this blog post has NO warranty. I have no idea what I'm doing and if this makes your non-bootable beta software machine even worse, that's on you, Dear Reader.


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!




How to Build a Kubernetes Cluster with ARM Raspberry Pi then run .NET Core on OpenFaas


6 Raspberry Pi Kubernetes Cluster with Fabulous Batman on top

First, why would you do this? Why not. It's awesome. It's a learning experience. It's cheaper to get 6 Pis than six "real computers." It's somewhat portable. While you can certainly quickly and easily build a Kubernetes Cluster in the cloud within your browser using a Cloud Shell, there's something more visceral about learning it this way, IMHO. Additionally, it's a non-trivial little bit of power you've got here. This is also a great little development cluster for experimenting. I'm very happy with the result.

By the end of this blog post you'll have not just Hello World but you'll have Cloud Native Distributed Containerized RESTful microservice based on ARMv7 w/ k8s Hello World! as a service. (original Tweet). ;)

Not familiar with why Kubernetes is cool? Check out Julia Evans' blog and read her K8s posts and you'll be convinced!

Hardware List (scroll down for Software)

Here's your shopping list. You may have a bunch of this stuff already. I had the Raspberry Pis and SD Cards already.

  • 6 - Raspberry Pi 3 - I picked 6, but you should have at least 3 or 4.
    • One Boss/Master and n workers. I did 6 because it's perfect for the power supply, perfect for the 8-port hub, AND it's a big but not unruly number.
  • 6 - Samsung 32Gb Micro SDHC cards - Don't be too cheap.
    • Faster SD cards are better.
  • 2x6 - 1ft flat Ethernet cables - Flat is the key here.
    • They are WAY more flexible. If you try to do this with regular 1ft cables you'll find them inflexible and frustrating. Get extras.
  • 1 - Anker PowerPort 6 Port USB Charging Hub - Regardless of this entire blog post, this product is amazing.
    • It's almost the same physical size as a Raspberry Pi, so it fits perfectly at the bottom of your stack. It puts out 2.4A per port AND (wait for it) it includes SIX 1ft Micro USB cables...perfect for running 6 Raspberry Pis with a single power adapter.
  • 1 - 7 layer Raspberry Pi Clear Case Enclosure - I only used 6 of these, which is cool.
    • I love this case, and it looks fantastic.
  • 1 - Black Box USB-Powered 8-Port Switch - This is another amazing and AFAIK unique product.
    • An overarching goal for this little stack is that it be easy to move around and set up, but also easy to power. We have power to spare, so I'd like to avoid a bunch of "wall warts" or power adapters. This is an 8 port switch that can be powered over a Raspberry Pi's USB. Because I'm giving up to 2.4A to each micro USB port, I just plugged this hub into one of the Pis and it worked no problem. It's also...wait for it...the size of a Pi. It also includes magnets for mounting.
  • 1 - Some Small Router - This one is a little tricky and somewhat optional.
    • You can just put these Pis on your own Wifi and access them that way, but you need to think about how they get their IP address. Who doles out IPs via DHCP? Static Leases? Static IPs completely?
    • The root question is - How portable do you want this stack to be? I propose you give them their own address space and their own router that you then use to bridge to other places. The easiest way is with another router (you likely have one lying around, as I did). It could be any router...and remember, hub/switch != router.
    • Here is a bad network diagram that makes the point, I think. The idea is that I should be able to go to a hotel or another place and just plug the little router into whatever external internet is available and the cluster will just work. Again, not needed unless portability matters to you as it does to me.
    • You could ALSO possibly get this to work with a Travel Router but then the external internet it consumed would be just Wifi and your other clients would get on your network subnet via Wifi as well. I wanted the relative predictability of wired.
    • What I WISH existed was a small router - similar to that little 8 port hub - that was powered off USB and had an internal and external Ethernet port. This ZyXEL Travel Router is very close...hm...
  • Optional - Pelican Case if you want portability. I'll see what airport security thinks. O_O
  • Optional - Tiny Keyboard and Mouse - Raspberry Pis can put out about 500mA per port for mice and keyboards. The number one problem I see with Pis is not giving them enough power and/or then having an external device take too much and destabilize the system. This little keyboard is also a touchpad mouse and can be used to debug your Pi when you can't get remote access to it. You'll also want an HDMI cable occasionally.
  • You're Rich - If you have money to burn, get the 7" Touchscreen Display and a Case for it, just to show off htop in color on one of the Pis.

Dodgy Network Diagram

Network Diagram showing that the Pi Stack has its own Router

Disclaimer

OK, first things first, a few disclaimers.

The software in this space is moving fast. There's a non-zero chance that some of this software will have a new version out before I finish this blog post. In fact, when I was setting up Kubernetes, I created a few nodes, went to bed for 6 hours, came back and made a few more nodes and a new version had come out. Try to keep track, keep notes, and be aware of what works with what.

Kubernetes 1.8.1

Next, I'm just learning this stuff. I may get some of this wrong. While I've built (very) large distributed systems before, my experience with large orchestrators (primarily in banks) was with large proprietary ones in Java, C++, COM, and later in C#, .NET 1.x,2.0, and WCF. It's been really fascinating to see how Kubernetes thinks about these things and comparing it to how we thought about these things in the 90s and very early 2000s. A lot of best practices that were HUGE challenges many years ago are now being codified and soon, I hope, will "just work" for a new generation of developer. At least another full page of my resume is being marked [Obsolete] and I'm here for it. Things change and they are getting better.

Software

Get your Raspberry PIs and SD cards together. Also bookmark and subscribe to Alex Ellis' blog as you're going to find yourself there a lot. He's the author of OpenFaas, which I'll be using today and he's done a LOT of work making this experiment possible. So thank you Alex for being awesome! He has a great post on how Multi-stage Docker files make it possible to effectively use .NET Core on a Raspberry Pi while still building on your main machine. He and I spent a few late nights going around and around to make this easy.

Alex has put together a Gist we iterated on and I'll summarize here. You'll do these instructions n times for all machines.

You'll do special stuff for the ONE master/boss node and different stuff for the worker nodes.

ADVANCED TIP! If you know what you're doing Linux-wise, you should save this excellent prep.sh shell script that Alex made, then SKIP to the node-specific instructions below. If you want to learn more, do it step by step.

ALL NODES

  • Burn Jessie to an SD card
  • Create an empty file called "ssh" before you put the card in the Raspberry Pi (this enables SSH on first boot)
  • SSH into the new Pi
    • I'm on Windows so I used WSL (Ubuntu) for Windows, which lets me SSH and run Linux natively.
    • ssh pi@raspberrypi
      • Login pi, password raspberry.
  • Change the Hostname

I ran

sudo raspi-config

then immediately rebooted with "sudo reboot"
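If you're setting up several Pis, the interactive menu gets tedious. A non-interactive sketch, assuming your raspi-config build supports nonint mode ("k8s-master" is just my example hostname):

# "k8s-master" is an example name - pick a unique one per node
sudo raspi-config nonint do_hostname k8s-master
sudo reboot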

  • Install Docker
curl -sSL get.docker.com | sh && \
  sudo usermod pi -aG docker
  • Disable Swap. Important - you'll get errors in Kubernetes otherwise.

sudo dphys-swapfile swapoff && \
  sudo dphys-swapfile uninstall && \
  sudo update-rc.d dphys-swapfile remove
  • Go edit /boot/cmdline.txt with your favorite editor, or use
    sudo nano /boot/cmdline.txt
    and add this at the very end. Don't press enter.
    cgroup_enable=cpuset cgroup_enable=memory
  • Install Kubernetes
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update -q && \
  sudo apt-get install -qy kubeadm

MASTER/BOSS NODE

After ssh'ing into my main node, I used ifconfig eth0 to figure out what the IP address was. Ideally you want this to be static (not changing) or at least a static lease. I logged into my router and set it as a static lease, so my main node ended up being 192.168.170.2, and .1 is the router itself.
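If you'd rather not depend on a router-side lease, you can also pin the IP on the Pi itself. A sketch, assuming Raspbian's stock dhcpcd setup and my example address space:

# append to /etc/dhcpcd.conf, then reboot (addresses match my example network)
interface eth0
static ip_address=192.168.170.2/24
static routers=192.168.170.1
static domain_name_servers=192.168.170.1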

Then I initialized this main node

sudo kubeadm init --apiserver-advertise-address=192.168.170.2

This took a WHILE. Like 10-15 min, so be patient.

Kubernetes uses this admin.conf for a ton of stuff, so you're going to want a copy in your $HOME folder so you can call "kubectl" easily later. Copy it and take ownership:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

When this is done, you'll get a nice print out with a ton of info and a token you have to save. Save it all. I took a screenshot.

The results of kubeadm init

WORKER NODES

SSH into your worker nodes and join each of them to the main node. This is the line you needed to have saved above when you did the kubeadm init.

kubeadm join --token d758dc.059e9693bfa5 192.168.170.2:6443 --discovery-token-ca-cert-hash sha256:c66cb9deebfc58800a4afbedf0e70b93c086d02426f6175a716ee2f4d

Did it work?

While ssh'ed into the main node - or from any networked machine that has the admin.conf on it - try a few commands.

Here I'm trying "kubectl get nodes" and "kubectl get pods."


Note that I already have some stuff installed, so you'll want to try "kubectl get pods --namespace kube-system" to see stuff running. If everything is "Running" then you can finish setting up networking. Kubernetes has fifty-eleven choices for networking and I'm not qualified to pick one. I tried Flannel and gave up, then tried Weave and it just worked. YMMV. Again, double check Alex's Gist if this changes.

kubectl apply -f https://git.io/weave-kube-1.6
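After applying that, you can watch the networking pods spin up on each node before moving on:

kubectl get pods --namespace kube-system --watch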

At this point you should be ready to run some code!

Hello World...with Markdown

Back to Alex's gist, I'll try this "markdownrender" app. It will take some Markdown and return HTML.

Go get the function.yml from here and create the new app on your new cluster.

$ kubectl create -f function.yml

$ curl -4 http://localhost:31118 -d "# test"
<p><h1>test</h1></p>

This part can be tricky - it was for me. You need to understand what you're doing here. How do we know the ports? A few ways. First, it's listed as nodePort in the function.yml that represents the desired state of the application.

We can also run "kubectl get svc" and see the ports for various services.

pi@hanselboss1:~ $ kubectl get svc

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager NodePort 10.103.43.130 <none> 9093:31113/TCP 1d
dotnet-ping ClusterIP 10.100.153.185 <none> 8080/TCP 1d
faas-netesd NodePort 10.103.9.25 <none> 8080:31111/TCP 2d
gateway NodePort 10.111.130.61 <none> 8080:31112/TCP 2d
http-ping ClusterIP 10.102.150.8 <none> 8080/TCP 1d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d
markdownrender NodePort 10.104.121.82 <none> 8080:31118/TCP 1d
nodeinfo ClusterIP 10.106.2.233 <none> 8080/TCP 1d
prometheus NodePort 10.98.110.232 <none> 9090:31119/TCP 2d

See those ports that are outside:inside? You can get to markdownrender directly via port 31118 on an internal IP like localhost, or the main/master IP. Those 10.x.x.x addresses are all software networking, so you don't need to worry about them. See?

pi@hanselboss1:~ $ curl -4 http://192.168.170.2:31118 -d "# test"

<h1>test</h1>

pi@hanselboss1:~ $ curl -4 http://10.104.121.82:31118 -d "# test"
curl: (7) Failed to connect to 10.104.121.82 port 31118: Network is unreachable

Can we access this cluster from another machine? My Windows laptop, perhaps?

Access your Raspberry Pi Kubernetes Cluster from your Windows Machine (or elsewhere)

I put kubectl on my local Windows machine and put it in the PATH.

  • I copied the admin.conf over from my Raspberry Pi. You will likely use scp or WinSCP.
  • I made a little local batch file like this. I may end up with multiple clusters and I want it easy to switch between them.
    • SET KUBECONFIG="C:\users\scott\desktop\k8s for pi\admin.conf"

Once you have Kubectl on another machine that isn't your Pi, try running "kubectl proxy" and see if you can hit your cluster like this. Remember you'll get weird "Connection refused" if kubectl thinks you're talking to a local cluster.


Here you can get to localhost:8001/api and move around, then you've successfully punched a hole over to your cluster (proxied) and you can treat localhost:8001 as your cluster. So "kubectl proxy" made that possible.
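In other words, the whole dance from the remote machine is just this - run from wherever your copied admin.conf lives, with the curl in a second prompt (or use a browser):

kubectl proxy

curl http://localhost:8001/api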

If you have WSL (Windows Subsystem for Linux) - and you should - then you could also do this and TUNNEL to the API. But I'm going to get cert errors and generally get frustrated. However, tunneling like this to other apps from Windows or elsewhere IS super useful. What about the Kubernetes Dashboard?

~ $ sudo ssh -L 8001:10.96.0.1:443 pi@192.168.170.2

I'm going to install the Kubernetes Dashboard like this:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard-arm.yaml

Pay close attention to that URL! There are several sites out there that may point to older URLs, the non-ARM dashboard, or use shortened URLs. Make sure you're applying the ARM dashboard. I looked here: https://github.com/kubernetes/dashboard/tree/master/src/deploy.

Notice I'm using the "alternative" dashboard. That's for development and I'm saying I don't care at all about security when accessing it. Be aware.

I can see where my Dashboard is running, the port and the IP address.

pi@hanselboss1:~ $ kubectl get svc --namespace kube-system

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 2d
kubernetes-dashboard ClusterIP 10.98.2.15 <none> 80/TCP 2d

NOW I can punch a hole with that nice ssh tunnel...

~ $ sudo ssh -L 8080:10.98.2.15:80 pi@192.168.170.2

I can access the Kubernetes Dashboard now from my Windows machine at http://localhost:8080 and hit Skip to login.

Kubernetes Dashboard

Do note the Namespace dropdown and think about what you're viewing. There's the kube-system stuff that manages the cluster.

Adding OpenFaas and calling a serverless function

Let's go to the next level. We'll install OpenFaas - think Azure Functions or Amazon Lambda, except for your own Docker and Kubernetes cluster. To be clear, OpenFaas is an Application that we will run on Kubernetes, and it will make it easier to run other apps. Then we'll run other stuff on it...just some simple apps like Hello World in Python and .NET Core. OpenFaas is one of several open source "Serverless" solutions.

Do you need to use OpenFaas? No. But if your goal is to write a DoIt() function and put it on your little cluster easily and scale it out, it's pretty fabulous.

Remember my definition of Serverless...there ARE servers, you just don't think about them.

Serverless Computing is like this - Your code, a slider bar, and your credit card.

Let's go.

.NET Core on OpenFaas on Kubernetes on Raspberry Pi

I ssh'ed into my main/master cluster Pi and set up OpenFaas:

git clone https://github.com/alexellis/faas-netes && cd faas-netes 


kubectl apply -f faas.armhf.yml,rbac.yml,monitoring.armhf.yml

Once OpenFaas is installed on your cluster, here are Alex's great instructions on how to set up your first OpenFaas Python function, so give that a try first and test it. Once we've installed that Python function, we can also hit http://192.168.170.2:31112/ui/ (where that's your main Boss/Master's IP) and see the OpenFaas UI.
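You can also poke the gateway's API directly from any machine on the network. A sketch, assuming my master's IP and the gateway port from above - this should list your deployed functions as JSON:

curl http://192.168.170.2:31112/system/functions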

OpenFaas and the "faas-netes" we set up above automate the build and deployment of our apps as Docker images to Kubernetes. They make the "Developer's Inner Loop" simpler. I'm going to make my .NET app, build, deploy, then change, build, deploy, and I want it to "just work" on my cluster. And later, I want it to scale.

OpenFaas Portal

I'm doing .NET Core, and since there is a runtime for .NET Core for Raspberry Pi (an ARM system) but no SDK, I need to do the build on my Windows machine and deploy from there.

Quick Aside: There are docker images for ARM/Raspberry PI for running .NET Core. However, you can't build .NET Core apps (yet?) directly ON the ARM machine. You have to build them on an x86/x64 machine and then get them over to the ARM machine. That can be SCP/FTPing them, or it can be making a docker container and then pushing that new docker image up to a container registry, then telling Kubernetes about that image. K8s (cool abbv) will then bring that ARM image down and run it. The technical trick that Alex and I noticed was of course that since you're building the Docker image on your x86/x64 machine, you can't RUN any stuff on it. You can build the image but you can't run stuff within it. It's an unfortunate limitation for now until there's a .NET Core SDK on ARM.

What's required on my development machine (not my Raspberry Pis)?

Here's the gist we came up with, again thanks Alex! I'm going to do it from Windows.

I'll use the faas-cli to make a new function with csharp. I'm calling mine dotnet-ping.

faas-cli new --lang csharp dotnet-ping

I'll edit the FunctionHandler.cs to add a little more. I'd like to know the machine name so I can see the scaling happen when it does.

using System;
using System.Text;

namespace Function
{
    public class FunctionHandler
    {
        public void Handle(string input)
        {
            Console.WriteLine("Hi your input was: " + input + " on " + System.Environment.MachineName);
        }
    }
}

Check out the .yml file for your new OpenFaas function. Note the gateway IP should be your main Pi, and the port is 31112 which is OpenFaas.

I also changed the image to include "shanselman/" which is my Docker Hub. You could also use a local Container Registry if you like.

provider:
  name: faas
  gateway: http://192.168.170.2:31112

functions:
  dotnet-ping:
    lang: csharp
    handler: ./dotnet-ping
    image: shanselman/dotnet-ping

Head over to the ./template/csharp/Dockerfile - we're going to change it. Ordinarily it's fine if you're publishing from x64 to x64, but since we're doing a little dance - building and publishing the .NET app as linux-arm from our x64 machine, THEN pushing it - we'll use a multi-stage Dockerfile. Change the default Dockerfile to this:

FROM microsoft/dotnet:2.0-sdk as builder


ENV DOTNET_CLI_TELEMETRY_OPTOUT 1

# Optimize for Docker builder caching by adding projects first.

RUN mkdir -p /root/src/function
WORKDIR /root/src/function
COPY ./function/Function.csproj .

WORKDIR /root/src/
COPY ./root.csproj .
RUN dotnet restore ./root.csproj

COPY . .

RUN dotnet publish -c release -o published -r linux-arm

ADD https://github.com/openfaas/faas/releases/download/0.6.1/fwatchdog-armhf /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog

FROM microsoft/dotnet:2.0.0-runtime-stretch-arm32v7

WORKDIR /root/
COPY --from=builder /root/src/published .
COPY --from=builder /usr/bin/fwatchdog /

ENV fprocess="dotnet ./root.dll"
EXPOSE 8080
CMD ["/fwatchdog"]

Notice a few things. All the RUN commands are above the second FROM where we take the results of the first container and use its output to build the second ARM-based one. We can't RUN stuff because we aren't on ARM, right?

We use the Faas-Cli to build the app, build the docker container, AND publish the result to Kubernetes.

faas-cli build -f dotnet-ping.yml --parallel=1

faas-cli push -f dotnet-ping.yml
faas-cli deploy -f dotnet-ping.yml --gateway http://192.168.170.2:31112

And here is the dotnet-ping command running on the pi, as seen within the Kubernetes Dashboard.

I can then scale them out like this:

kubectl scale deploy/dotnet-ping --replicas=6
.NET on Raspberry Pi on Kubernetes 

And if I hit it multiple times - either via curl or via the dashboard, I see it's hitting different pods:

OpenFaas scales .NET apps
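If you'd rather script it than click around the dashboard, a quick loop against the gateway shows the same thing. A sketch, assuming the function name and gateway from above - each response includes the MachineName from our Handle() method, so you can watch the load spread across pods:

for i in $(seq 1 10); do curl -s http://192.168.170.2:31112/function/dotnet-ping -d "ping"; done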

If I want to get super fancy, I can install Grafana - a dashboard manager - by running it locally on my machine on port 3000:

docker run -p 3000:3000 -d grafana/grafana

Then I can add OpenFaas as a data source by pointing Grafana to http://192.168.170.2:31119, which is where the Prometheus metrics app is already running, then import the OpenFaas dashboard from the grafana.json file in the repo I cloned.

Grafana

Super cool. I'm going to keep using this little Raspberry Pi Kubernetes Cluster to learn as I get ready to do real K8s in Azure! Thanks to Alex Ellis for his kindness and patience and to Jessie Frazelle for making me love both Windows AND Linux!

* If you like this blog, please do use my Amazon links as they help pay for projects like this! They don't make me rich, but a few dollars here and there can pay for Raspberry Pis!

Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



Optimizing ASP.NET Core Docker Image sizes


ASP.NET Core on Kubernetes

There is a great post from Steve Lasker in 2016 about optimizing ASP.NET Docker image sizes. Since then Docker has added multi-stage build files, so you can do more in one Dockerfile...which feels like one step even though it's not. Containers are about easy and reliable deployment, and they're also about density. You want to use as little memory as possible, sure, but it also is nice to make them as small as possible so you're not spending time moving them around the network. The size of the image file can also affect startup time for the container. Plus it's just tidy.

I've been building a little 6 node Raspberry Pi (ARM) Kubernetes cluster on my desk - like you do - this week, and I noticed that my image sizes were a little larger than I'd like. This is a bigger issue because it's a relatively low-powered system, but again, why carry around unnecessary megabytes if you don't have to?

Alex Ellis has a great blog on building .NET Core apps for Raspberry Pi along with a YouTube video. In his video and blog he builds a "Console.WriteLine()" console app, which is great for OpenFaas (open source serverless platform) but I wanted to also have ASP.NET Core apps on my Raspberry Pi k8s cluster. He included this as a "challenge" in his blog, so challenge accepted! Thanks for all your help and support, Alex!

ASP.NET Core on Docker (on ARM)

First I make a basic ASP.NET Core app. I could do a Web API, but this time I'll do an MVC one with Razor Pages. To be clear, they are the same thing just with different starting points. I can always add pages or add JSON to either, later.

I start with "dotnet new mvc" (or dotnet new razor, etc.). I'm going to be running this in Docker, managed by Kubernetes, and while I can always change the WebHost in Program.cs to control how the Kestrel web server starts up, like this:

WebHost.CreateDefaultBuilder(args)
    .UseUrls("http://*:5000;http://localhost:5001;https://hostname:5002")

For Docker use cases it's easier to change the listening URL with an Environment Variable. Sure, it could be 80, but I like 5000. I'll set the ASPNETCORE_URLS environment variable to http://+:5000 when I make the Dockerfile.
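As a sanity check, you can also pass that same variable at run time rather than baking it into the image. A sketch using my image name from later in this post (run it on an ARM host, since that's what we're building for):

docker run -e ASPNETCORE_URLS=http://+:5000 -p 5000:5000 shanselman/aspnetcoreapp:0.5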

Optimized MultiStage Dockerfile for ASP.NET

There's a number of "right" ways to do this, so you'll want to think about your scenarios. You'll see below that I'm using ARM (because Raspberry Pi) so if you see errors running your container like "qemu: Unsupported syscall: 345" then you're trying to run an ARM image on x86/x64. I'm going to be building an ARM container from Windows but I can't run it here. I have to push it to a container registry and then tell my Raspberry Pi cluster to pull it down and THEN it'll run, over there.

Here's what I have so far. NOTE there are some things commented out, so be conscious. This is/was a learning exercise for me. Don't you copy/paste unless you know what's up! And if there's a mistake, here's a GitHub Gist of my Dockerfile for you to change and improve.

It's important to understand that .NET Core has an SDK with build tools and development kits and compilers and stuff, and then it has a runtime. The runtime doesn't have the "make an app" stuff, it only has the "run an app stuff." There is not currently an SDK for ARM so that's a limitation that we are (somewhat elegantly) working around with the multistage build file. But, even if there WAS an SDK for ARM, we'd still want to use a Dockerfile like this because it's more efficient with space and makes a smaller image.

Let's break this down. There are two stages. The first FROM is the SDK image that builds the code. We're doing the build inside Docker - which is lovely, and a great, reliable way to do builds.

PRO TIP: Docker is smart about making intermediate images and doing the least work, but it's useful if we (the authors) do the right thing as well to help it out.

For example, see where we COPY the .csproj over and then do a "dotnet restore"? Often you'll see folks do a "COPY . ." and then do a restore. That doesn't allow Docker to detect what's changed and you'll end up paying for the restore on EVERY BUILD.

By making this two steps - copy the project, restore, then copy the code - your "dotnet restore" intermediate step will be cached by Docker and things will be WAY faster.

After you build, you'll do a publish. If you know the destination like I do (linux-arm) you can do a RID (runtime id) publish that is self-contained with -r linux-arm (or debian, or whatever) and you'll get a complete self-contained version of your app.

Otherwise, you can just publish your app's code and use a .NET Core runtime image to run it. Since I'm using a complete self-contained build for this image, it would be overkill to ALSO include the .NET runtime. If you look at the Docker Hub for microsoft/dotnet you'll see images called "deps" for "dependencies." Those are images that sit on top of debian and include the things .NET needs to run - but not .NET itself.

The stack of images looks generally like this (for example)

  • FROM debian:stretch
  • FROM microsoft/dotnet:2.0-runtime-deps
  • FROM microsoft/dotnet:2.0-runtime

So you have your base image, your dependencies, and your .NET runtime. The SDK image would include even more stuff since it needs to build code. Again, that's why we use that for the "as builder" image and then copy out the results of the compile and put them in another runtime image. You get the best of all worlds.

FROM microsoft/dotnet:2.0-sdk as builder  


RUN mkdir -p /root/src/app/aspnetcoreapp
WORKDIR /root/src/app/aspnetcoreapp

#copy just the project file over
# this prevents additional extraneous restores
# and allows us to re-use the intermediate layer
# This only happens again if we change the csproj.
# This means WAY faster builds!
COPY aspnetcoreapp.csproj .
#Because we have a custom nuget.config, copy it in
COPY nuget.config .
RUN dotnet restore ./aspnetcoreapp.csproj

COPY . .
RUN dotnet publish -c release -o published -r linux-arm

#Smaller - Best for apps with self-contained .NETs, as it doesn't include the runtime
# It has the *dependencies* to run .NET Apps. The .NET runtime image sits on this
FROM microsoft/dotnet:2.0.0-runtime-deps-stretch-arm32v7

#Bigger - Best for .NET apps that aren't self-contained.
#FROM microsoft/dotnet:2.0.0-runtime-stretch-arm32v7

# These are the non-ARM images.
#FROM microsoft/dotnet:2.0.0-runtime-deps
#FROM microsoft/dotnet:2.0.0-runtime

WORKDIR /root/
COPY --from=builder /root/src/app/aspnetcoreapp/published .
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000/tcp
# This runs your app with the dotnet exe included with the runtime or SDK
#CMD ["dotnet", "./aspnetcoreapp.dll"]
# This runs your self-contained .NET Core app. You built with -r to get this
CMD ["./aspnetcoreapp"]

Notice also that I have a custom nuget.config, so if you do too, you'll need to make sure it's available at build time for dotnet restore to pick up all packages.

I've included, but commented out, a bunch of the FROMs in the second stage. I'm using just the ARM one, but I wanted you to see the others.

Once we have the code we built copied into our runtime image, we set our environment variable so our app listens on port 5000 internally (remember that from above?). Then we run our app. Notice that you can run it with "dotnet foo.dll" if you have the runtime, but if you are like me and using a self-contained build, then you'll just run "foo."

To sum up:

  • Build with FROM microsoft/dotnet:2.0-sdk as builder
  • Copy the results out to a runtime
  • Use the right runtime FROM for you
    • Right CPU architecture?
    • Using the .NET Runtime (typical) or using a self-contained build (less so)
  • Listening on the right port (if a web app)?
  • Running your app successfully and correctly?

Optimizing a little more

There are a few pre-release "Tree Trimming" tools that can look at your app and remove code and binaries that you are not calling. I included Microsoft.Packaging.Tools.Trimming as well to try it out and get even more unused code out of my final image by just adding a package to my project.

Step 8/14 : RUN dotnet publish -c release -o published -r linux-arm /p:LinkDuringPublish=true

---> Running in 39404479945f
Microsoft (R) Build Engine version 15.4.8.50001 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Trimmed 152 out of 347 files for a savings of 20.54 MB
Final app size is 33.56 MB
aspnetcoreapp -> /root/src/app/aspnetcoreapp/bin/release/netcoreapp2.0/linux-arm/aspnetcoreapp.dll
Trimmed 152 out of 347 files for a savings of 20.54 MB
Final app size is 33.56 MB

If you run docker history on your final image you can see exactly where the size comes from. If/when Microsoft switches from a Debian base image to an Alpine one, this should get even smaller.

C:\Users\scott\Desktop\k8s for pi\aspnetcoreapp>docker history c60

IMAGE CREATED CREATED BY SIZE COMMENT
c6094ca46c3b 3 minutes ago /bin/sh -c #(nop) CMD ["dotnet" "./aspnet... 0B
b7dfcf137587 3 minutes ago /bin/sh -c #(nop) EXPOSE 5000/tcp 0B
a5ba51b91d9d 3 minutes ago /bin/sh -c #(nop) ENV ASPNETCORE_URLS=htt... 0B
8742269735bc 3 minutes ago /bin/sh -c #(nop) COPY dir:cc64bd3b9bacaeb... 56.5MB
28c008e38973 3 minutes ago /bin/sh -c #(nop) WORKDIR /root/ 0B
4bafd6e2811a 4 hours ago /bin/sh -c apt-get update && apt-get i... 45.4MB
<missing> 3 weeks ago /bin/sh -c #(nop) CMD ["bash"] 0B
<missing> 3 weeks ago /bin/sh -c #(nop) ADD file:8b7cf813a113aa2... 85.7MB

Here is the evolution of my Dockerfile as I made changes and the final result got smaller and smaller. Looks like 45 megs trimmed with a little work or about 20% smaller.

C:\Users\scott\Desktop\k8s for pi\aspnetcoreapp>docker images | find /i "aspnetcoreapp"

shanselman/aspnetcoreapp 0.5 c6094ca46c3b About a minute ago 188MB
shanselman/aspnetcoreapp 0.4 083bfbdc4e01 12 minutes ago 196MB
shanselman/aspnetcoreapp 0.3 fa053b4ee2b4 About an hour ago 199MB
shanselman/aspnetcoreapp 0.2 ba73f14e29aa 4 hours ago 207MB
shanselman/aspnetcoreapp 0.1 cac2f0e3826c 3 hours ago 233MB

Later I'll do a blog post where I put this standard ASP.NET Core web app into Kubernetes using this YAML description and scale it out on the Raspberry Pi. I'm learning a lot! Thanks to Alex Ellis and Glenn Condron and Jessie Frazelle for their time!


Sponsor: Create powerful Web applications to manage each step of a document’s life cycle with DocuVieware HTML5 Viewer and Document Management Kit. Check our demos to acquire, scan, edit, annotate 100+ formats, and customize your UI!




The perfect Nintendo Switch travel set up and recommended accessories


I've had a Nintendo Switch since launch day and let me tell you, it's joyful. Joyous. It's a little joy device. I love 4k Xboxen and raw power as much as the next Jane or Joe Gamer, but the Switch just keeps pumping out happy games. Indie games, Metroidvania games like Axiom Verge, Legend of Zelda: Breath of the Wild (worth the cost of the system), and now Super Mario Odyssey. Even Doom and Wolfenstein 2 are coming to the Switch soon!

I've already traveled all over with my Switch. Here's what I've come up with for my travels - and my at-home Switch experience. I own and use these items personally - and I vouch for their awesomeness and utility.

Bluetooth Adapter


This TaoTronics Bluetooth adapter fixes the most obvious problem with the Switch - no Bluetooth headset support. If there is ever a Switch 1.5 release, you can bet they'll add Bluetooth. This device is great for a few reasons. It's small, it has its own rechargeable battery, it charges with micro USB, and it supports both transmit and receive. That's an added bonus in that it lets you turn any speakers with a 1/8" headphone jack into a BT speaker. Again, it's tiny and fits in my Switch case. I pair my AirPods with this device by putting the AirPods into pairing mode with the case button, then holding down the pairing button on this adapter, which promiscuously pairs. Works great.

Switch Travellers Case


I have a Zelda version of this case. It's very roomy and I can fit a 3rd party stand, a dozen cartridges, BT adapter, headphones, screen wipes, and more inside. There's a number of options and styles past the link, including character cases.

Switch Joy-Con Gel Covers


These gel-covers - or ones like them - are essential. The Switch Joy-Cons are great for children's hands, but for normal/larger-sized people they are lacking something. It's not the cover, it's the extra depth these gel covers give you. I can't use the Switch without them.

HORI Compact Playstand


This is an airplane must. I want to use my Pro Controller on a plane - or at least detached Joy-Cons - so ideally I want the Switch to stand on its own. The Switch does have its own kickstand, but honestly, it's flimsy. It works when the world isn't moving, but the angle is wrong and it tips over easily on a plane. This playstand folds flat, fits in the case above, and is very adjustable. It also works great to hold your phone or small tablet for watching movies, so it ends up playing double duty. Plus, it's $12.

Switch Grip Kit


This one is optional UNLESS you have little kids and Mario Kart. When you're using Switch Joy-Cons as individual controllers, again, they are small. These turn them into tiny Xbox-style controllers. They are plastic holsters, but the kids love them.

HDMI Type C USB Hub Adapter for Switch


This can replace your not-portable Switch Dock. I didn't believe it would work but it's great. I can also fit this tiny Dongle in my Switch Case, and along with an HDMI cable and existing Switch power adapter I can plug the Switch into any hotel TV with HDMI. It's an amazing thing to be able to game in a hotel on a long business trip with minimal stuff to carry.

BASSTOP Portable Switch Dock


Another docking option that requires some assembly and disassembly on your part is this Portable Dock. It's not the dock, it's just the plastic shell. You'll need to take apart your existing giant dock and discover it's all air. The internals of the official dock then fit inside this one.

What are YOUR must have Switch Accessories? And more important, WHY HAVE YOU NO BUY SWITCH?

* My blog often uses Amazon affiliate links. I use that money for tacos and switch games. Please click on them and support my blog!


Sponsor: Create powerful Web applications to manage each step of a document’s life cycle with DocuVieware HTML5 Viewer and Document Management Kit. Check our demos to acquire, scan, edit, annotate 100+ formats, and customize your UI!




WebOptimizer - a Bundler and Minifier for ASP.NET Core


ASP.NET Core didn't have a runtime bundler like previous versions of ASP.NET. This was a bummer as I was a fan. Fortunately Mads Kristensen created one and put it on GitHub, called WebOptimizer.

WebOptimizer - ASP.NET Core middleware for bundling and minification of CSS and JavaScript files at runtime. With full server-side and client-side caching to ensure high performance.

I'll try it out on a default ASP.NET Core 2.0 app.

First, assuming I've installed http://dot.net I'll run

C:\Users\scott\Desktop> cd squishyweb


C:\Users\scott\Desktop\squishyweb> dotnet new mvc
The template "ASP.NET Core Web App (Model-View-Controller)" was created successfully.
This template contains technologies from parties other than Microsoft, see https://aka.ms/template-3pn for details.

SNIP

Restore succeeded.

Then I'll add a reference to the WebOptimizer package. Be sure to check the versioning and pick the one you want, or use the latest.

C:\Users\scott\Desktop\squishyweb> dotnet add package LigerShark.WebOptimizer.Core --version 1.0.178-beta 

Add the service in ConfigureServices, then add the middleware (I'll do it conditionally, only in Production) in Configure. Notice I had to put it before UseStaticFiles() because I want it to get first chance at those requests.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddWebOptimizer();
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseWebOptimizer();
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

After running "dotnet run" I'll request site.css as an example and see it's automatically minimized:

CSS minification automatically

You can control the pipeline with globbing like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddWebOptimizer(pipeline =>
    {
        pipeline.MinifyJsFiles("js/a.js", "js/b.js", "js/c.js");
    });
}

If I wanted to combine some files into an output "file" that'll be held/cached only in memory, I can do that also. To be clear, it'll never touch the disk, it's just a URL. Then I can just refer to it with a <link> within my Razor Page or main Layout.

services.AddWebOptimizer(pipeline =>
{
    pipeline.AddCssBundle("/css/mybundle.css", "css/*.css");
});

WebOptimizer also supports automatic "cache busting" with a ?v= query string created by a TagHelper. It can even compile Scss (Sass) files into CSS. There's plugins for TypeScript, Less, and Markdown too!

WebOptimizer is open source and lives at https://github.com/ligershark/WebOptimizer. Go check it out, kick the tires, and see if it meets your needs! Maybe get involved and make a fix or help with docs! There are already some open issues you could start helping with.


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!




Lightweight bundling, minifying, and compression, for CSS and JavaScript with ASP.NET Core and Smidge


Yesterday I blogged about WebOptimizer, a minifier that Mads Kristensen wrote for ASP.NET Core. A few people mentioned that Shannon Deminick also had a great minifier for ASP.NET Core. Shannon has a number of great libraries on his GitHub https://github.com/Shazwazza including not just "Smidge" but also Examine, an indexing system, ClientDependency for managing all your client side assets, and Articulate, a blog engine built on Umbraco.

When there's more than one way to do something, and one of the ways is made by a Microsoft employee like Mads - even if it's in his spare time - it can feel like inside baseball or an unfair advantage. The same would apply if I made a node.js library but a node.js core committer also made a similar one. Many things can affect whether an open source library "pops," and it's not always merit. Sometimes it's locale/location, niceness of docs, marketing, word of mouth, or the website. Mads, Shannon, and a dozen other people are all making great libraries and useful stuff. Sometimes people are aware of other projects and sometimes they aren't. At some point a community wants to "pick a winner," but even as I write this blog post, someone else we haven't met yet is likely making the next great bundler/minifier. And that's OK!

I'm going to take a look at Shannon Deminick's "Smidge" in this post. Smidge has been around as a runtime bundler since the beginning of ASP.NET Core, even back when DNX was a thing, if you remember that. Shannon's been updating the library as ASP.NET Core has evolved, and it's under active development.

Smidge supports minification, combination, and compression for JS/CSS files, and features a fluent syntax for creating and configuring bundles.

I'll start from "dotnet new mvc" and then:

C:\Users\scott\Desktop\smidgenweb>dotnet add package smidge

Writing C:\Users\scott\AppData\Local\Temp\tmp325B.tmp
info : Adding PackageReference for package 'smidge' into project 'C:\Users\scott\Desktop\smidgenweb\smidgenweb.csproj'.
log : Restoring packages for C:\Users\scott\Desktop\smidgenweb\smidgenweb.csproj...
...SNIP...
log : Installing Smidge 3.0.0.
info : Package 'smidge' is compatible with all the specified frameworks in project 'C:\Users\scott\Desktop\smidgenweb\smidgenweb.csproj'.
info : PackageReference for package 'smidge' version '3.0.0' added to file 'C:\Users\scott\Desktop\smidgenweb\smidgenweb.csproj'.

Then I'll update appSettings.json (where logging lives) and add Smidge's config:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "smidge": {
    "dataFolder": "App_Data/Smidge",
    "version": "1"
  }
}
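
Smidge also gets wired up in ConfigureServices and handed that config section. A minimal sketch based on Smidge's README (double-check the current docs for the exact overloads; Configuration here is the standard IConfiguration injected into Startup):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSmidge(Configuration.GetSection("smidge"));
}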

Let me squish my CSS, so I'll make a bundle:

app.UseSmidge(bundles =>
{
    bundles.CreateCss("my-css", "~/css/site.css");
});

I refer to the bundle by name and the Smidge tag helper turns this:

<link rel="stylesheet" href="my-css" />

into this

<link href="/sb/my-css.css.v1" rel="stylesheet" />

Notice the generated filename with version embedded. That bundle could be one or more files, a whole folder, whatever you need.

Here you can see Kestrel handling the request. Smidge jumps in there and does its thing, then the bundle is cached for the next request!

info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method Smidge.Controllers.SmidgeController.Bundle (Smidge) with arguments (Smidge.Models.BundleRequestModel) - ModelState is Valid
dbug: Smidge.Controllers.SmidgeController[0]
      Processing bundle 'my-css', debug? False ...
dbug: Smidge.FileProcessors.PreProcessManager[0]
      Processing file '/css/site.css', type: Css, cacheFile: C:\Users\scott\Desktop\smidgenweb\App_Data\Smidge\Cache\SONOFHEXPOWER\1\bb8368ef.css, watching? False ...
dbug: Smidge.FileProcessors.PreProcessManager[0]
      Processed file '/css/site.css' in 19ms
dbug: Smidge.Controllers.SmidgeController[0]
      Processed bundle 'my-css' in 73ms
info: Microsoft.AspNetCore.Mvc.Internal.VirtualFileResultExecutor[1]
      Executing FileResult, sending file

The minified results are cached wherever you want (remember I said App_Data):

Compressed JS and CSS

This is a SUPER simple example. You can use Smidge's fluent interface to affect how an individual bundle is created and behaves:

bundles.CreateJs("test-bundle-3", "~/Js/Bundle3")
    .WithEnvironmentOptions(BundleEnvironmentOptions.Create()
        .ForDebug(builder => builder
            .EnableCompositeProcessing()
            .EnableFileWatcher()
            .SetCacheBusterType<AppDomainLifetimeCacheBuster>()
            .CacheControlOptions(enableEtag: false, cacheControlMaxAge: 0))
        .Build()
    );
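
As with the CSS bundle, you'd then reference a JS bundle by name and let the tag helper expand it - a sketch, assuming the same convention as the CSS example above:

<script src="test-bundle-3"></script>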

Smidge is unique in its custom pre-processing pipeline. Similar to ASP.NET Core itself, if there's anything you don't like or any behavior you want to change, you can swap it out.

I'm sure Shannon would appreciate help with documentation and open issues, so go check out Smidge at https://github.com/Shazwazza/Smidge!


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2017 Scott Hanselman. All rights reserved.

Announcing Visual Studio and Kubernetes – Visual Studio Connected Environment


I've been having all kinds of fun lately with Kubernetes, exploring building my own Kubernetes Cluster on the metal, as well as using a managed Kubernetes cluster in Azure with AKS.

Today at the Connect() conference in NYC I was happy to announce Visual Studio Connected Environment. How would one take the best of Visual Studio and the best of managed Kubernetes and create something useful for development teams?

Ecosystem momentum behind containers is amazing right now with support for containers across clouds, operating systems, and development platforms. Additionally, while microservices as an architectural pattern has been around for years, more and more developers are discovering the advantages every day.

You can check out videos of the Connect() conference at https://www.microsoft.com/connectevent, but you should check out my practice video where I show a live demo of Kubernetes in Visual Studio:

The buzzword "cloud native" is thrown around a lot. It's a meaningful term, though, as it means "architecture with the cloud in mind." Applications that are cloud-native should consider these challenges:

  • Connecting to and leveraging cloud services
    • Use the right cloud services for your app, don't roll your own DB, Auth, Discovery, etc.
  • Dealing with complexity and staying cognizant of changes
    • Stubbing out copies of services can increase complexity and hide issues when your chain of invocations grows. K.I.S.S.
  • Setting up and managing infrastructure and dealing with changing pre-requisites
    • Even though you may have moved to containers for production, is your dev environment as representative of prod as possible?
  • Establishing consistent, common environments
    • Setting up private environments can be challenging, and it gets messier when you need to manage your local env, your team dev, staging, and ultimately prod.
  • Adopting best practices such as service discovery and secrets management
    • Keep secrets out of code, this is a solved problem. Service discovery and lookup should be straightforward and reliable in all environments.

A lot of this reminds us to use established and mature best practices, and avoid re-inventing the wheel when one already exists.

The announcements at Connect() are pretty cool because they're extending both VS and the Azure cloud to work the way devs work AND the way devops works. They're extending the developers’ IDE/editor experience into the cloud, with services built on top of the container orchestration capabilities of Kubernetes on Azure - in Visual Studio, VS Code, and Visual Studio for Mac, AND through a CLI (command line interface). They'll initially support .NET Core, node.js, and Java on Linux. As Azure adds more support for Windows containers in Kubernetes, they'll enable .NET Full Framework applications. Given the state of Windows containers support in the platform, the initial focus is on green field development scenarios, but lift-shift and modernize will come later.

It took me a moment to get my head around it (be sure to watch the video!) but it's pretty amazing. Your team has a shared development environment, with your containers living in, and managed by, Kubernetes. However, your local development machine can then reserve its own space for the services and containers you're working on. You won't break the team with the work you're doing, but you'll be able to see how your services work and interact in an environment that's close to how it will look in production.

PLUS, you can F5 debug from Visual Studio or Visual Studio Code and debug, live in the cloud, in Kubernetes, as fast as you could locally.

Shared Development Environment

This positions Kubernetes as the underlayment for your containers, with the backplane managed by Azure/AKS, and the development experience behaving the way it always has. You use Visual Studio, or Visual Studio Code, or the command line, and you use the languages and platforms that you prefer. In the demo I switch between .NET Core/C# and Node, VS and VS Code, no problem.

I, for one, look forward to our containerized future, and I hope you check it out as well!

You can sign up for the preview at http://aka.ms/signup-vsce


Sponsor: Why miss out on version controlling your database? It’s easier than you think because SQL Source Control connects your database to the same version control tools you use for applications. Find out how.



© 2017 Scott Hanselman. All rights reserved.

Docker and Linux Containers on Windows, with or without Hyper-V Virtual Machines


Containers are lovely, in case you haven't heard. They are a nice and clean way to get a reliable and guaranteed deployment, no matter the host system.

If I want to run my ASP.NET Core application, I can just type "docker run -p 5000:80 shanselman/demos" at the command line, and it'll start up! I don't have any concerns that it won't run. It'll run, and run well.

Some container naysayers say, sure, we could do the same thing with Virtual Machines, but even today a VHD (virtual hard drive) is a rather unruly thing that includes a ton of overhead a container doesn't have. Containers are happening and you should be looking hard at them for your deployments.

docker run shanselman/demos

Historically on Windows, however, Linux Containers run inside a Hyper-V virtual machine. This can be a good thing or a bad thing, depending on what your goals are. Running Containers inside a VM gives you significant isolation with some overhead. This is nice for Servers but less so for my laptop. Docker for Windows hides the VM for the most part, but it's there. Your Container runs inside a Linux VM that runs within Hyper-V on Windows proper.

HyperV on Windows

With the latest version of Windows 10 (or 10 Server) and the beta of Docker for Windows, there's native Linux Container support on Windows. That means there's no Virtual Machine or Hyper-V involved (unless you want one), so Linux Containers run on Windows itself using Windows 10's built-in container support.

For now you have to switch "modes" between Hyper-V and native Containers, and you can't (yet) run Linux and Windows Containers side by side. The word on the street is that this is just a point-in-time thing, and that Docker will at some point support running Linux and Windows Containers in parallel. That's pretty sweet because it opens up all kinds of cool hybrid scenarios. I could run a Windows Server container with a .NET Framework ASP.NET app that talks to a Linux Container running Redis or Postgres. I could then put them all up into Kubernetes in Azure, for example.
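
The Linux half of that hybrid scenario is easy to try today. For example, this pulls the standard Redis image from Docker Hub and runs it as a Linux container that an app could talk to on port 6379:

docker run -d --name some-redis -p 6379:6379 redis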

Once I've turned on Linux Containers on Windows within Docker, everything just works, with one less moving part.

Linux Containers on Docker

I can easily and quickly run busybox or real Ubuntu (although Windows 10 already supports Ubuntu natively with WSL):

docker run -ti busybox sh

More useful even is to run the Azure Command Line with no install! Just "docker run -it microsoft/azure-cli" and it's running in a Linux Container.

Azure CLI in a Container

I can even run nyancat! (Thanks Thomas!)

docker run -it supertest2014/nyan

nyancat!

Speculating - I look forward to the day I can run "minikube start --vm-driver="windows" (or something) and easily set up a Kubernetes development system locally using Windows native Linux Container support rather than using Hyper-V Virtual Machines, if I choose to.


Sponsor: Why miss out on version controlling your database? It’s easier than you think because SQL Source Control connects your database to the same version control tools you use for applications. Find out how.


© 2017 Scott Hanselman. All rights reserved.

Trying out new .NET Core Alpine Docker Images


I blogged recently about optimizing .NET and ASP.NET Docker image sizes. .NET Core 2.0 has previously been built on a Debian image, but today there is a preview image with .NET Core 2.1 nightlies using Alpine. You can read the announcement about this new Alpine preview image here. There's also a good rollup post on .NET and Docker.

They have added two new images:

  • 2.1-runtime-alpine
  • 2.1-runtime-deps-alpine

Alpine support is part of the .NET Core 2.1 release. .NET Core 2.1 images are currently provided at the microsoft/dotnet-nightly repo, including the new Alpine images. .NET Core 2.1 images will be promoted to the microsoft/dotnet repo when released in 2018.

NOTE: The -runtime-deps- image contains the dependencies needed for a .NET Core application, but NOT the .NET Core runtime itself. This is the image you'd use if your app is a self-contained application that includes a copy of the .NET Core runtime - that is, apps published with -r [runtimeid]. Most folks will use the -runtime- image that includes the full .NET Core runtime. To be clear:

- The runtime image contains the .NET Core runtime and is intended to run Framework-Dependent Deployed applications - see sample

- The runtime-deps image contains just the native dependencies needed by .NET Core and is intended to run Self-Contained Deployed applications - see sample
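
For example, here's the difference at publish time (a sketch - the exact runtime identifier depends on your target platform):

dotnet publish -c Release                 # framework-dependent, pair with a -runtime- image
dotnet publish -c Release -r linux-x64    # self-contained, pair with a -runtime-deps- image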

It's best with .NET Core to use multi-stage build files, so you have one container that builds your app and one that contains the results of that build. That way you don't end up shipping an image with a bunch of SDKs and compilers you don't need.

NOTE: Read this to learn more about image versions in Dockerfiles so you can pick the right tag and digest for your needs. Ideally you'll pick a docker file that rolls forward to include the latest servicing patches.

Given this Dockerfile, we build with the SDK image, then publish, and the result is about 219 megs.

FROM microsoft/dotnet:2.0-sdk as builder

RUN mkdir -p /root/src/app/dockertest
WORKDIR /root/src/app/dockertest

COPY dockertest.csproj .
RUN dotnet restore ./dockertest.csproj

COPY . .
RUN dotnet publish -c release -o published

FROM microsoft/dotnet:2.0.0-runtime

WORKDIR /root/
COPY --from=builder /root/src/app/dockertest/published .
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000/tcp
CMD ["dotnet", "./dockertest.dll"]

Then I'll save this as Dockerfile.debian and build like this:

> docker build . -t shanselman/dockertestdeb:0.1 -f dockerfile.debian

With a standard ASP.NET app this image ends up being 219 megs.

Now I'll just change one line and use the 2.1 Alpine runtime:

FROM microsoft/dotnet-nightly:2.1-runtime-alpine

And build like this:

> docker build . -t shanselman/dockertestalp:0.1 -f dockerfile.alpine

and compare the two:

> docker images | find /i "dockertest"

shanselman/dockertestalp 0.1 3f2595a6833d 16 minutes ago 82.8MB
shanselman/dockertestdeb 0.1 0d62455c4944 30 minutes ago 219MB

Nice. About 83 megs now rather than 219 megs for a Hello World web app. Now the idea of a microservice is more feasible!

Please do head over to the GitHub issue here https://github.com/dotnet/dotnet-docker-nightly/issues/500 and offer your thoughts and results as you test these Alpine images. Also, are you interested in a "-debian-slim?" It would be halfway to Alpine but not as heavy as just -debian.

Lots of great stuff happening around .NET and Docker. Be sure to also check out Jeff Fritz's post on creating a minimal ASP.NET Core Windows Container to see how you can squish .NET (full) Framework applications running on Windows containers as well. For example, the Windows Nano Server images are just 93 megs compressed.


Sponsor: Get the latest JetBrains Rider preview for .NET Core 2.0 support, Value Tracking and Call Tracking, MSTest runner, new code inspections and refactorings, and the Parallel Stacks view in debugger.



© 2017 Scott Hanselman. All rights reserved.

Writing smarter cross-platform .NET Core apps with the API Analyzer and Windows Compatibility Pack


There are a couple of great utilities that have come out in the last few weeks in the .NET Core world that you should be aware of. They're deeply useful when porting/writing cross-platform code.

.NET API Analyzer

First is the API Analyzer. As you know, APIs sometimes get deprecated, or you'll use a method on Windows and find it doesn't work on Linux. The API Analyzer is a Roslyn (remember Roslyn is the name of the C#/.NET compiler) analyzer that's easily added to your project as a NuGet package. All you have to do is add it and you'll immediately start getting warnings and/or squiggles calling out APIs that might be a problem.

Check out this quick example. I'll make a console app, then add the analyzer. Note that the version is current as of the time of this post; it'll change.

C:\supercrossplatapp> dotnet new console

C:\supercrossplatapp> dotnet add package Microsoft.DotNet.Analyzers.Compatibility --version 0.1.2-alpha

Then I'll use an API that only works on Windows. However, I still want my app to run everywhere.

static void Main(string[] args)
{
    Console.WriteLine("Hello World!");

    if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
    {
        var w = Console.WindowWidth;
        Console.WriteLine($"Console Width is {w}");
    }
}

Then I'll "dotnet build" (or run, which implies build) and I get a nice warning that one API doesn't work everywhere.

C:\supercrossplatapp> dotnet build

Program.cs(14,33): warning PC001: Console.WindowWidth isn't supported on Linux, MacOSX [C:\Users\scott\Desktop\supercrossplatapp\supercrossplatapp.csproj]
supercrossplatapp -> C:\supercrossplatapp\bin\Debug\netcoreapp2.0\supercrossplatapp.dll

Build succeeded.

Olia from the .NET Team did a great YouTube video where she shows off the API Analyzer and how it works. The code for the API Analyzer is up here on GitHub. Please leave an issue if you find one!

Windows Compatibility Pack for .NET Core

Second, the Windows Compatibility Pack for .NET Core is a nice piece of tech. When .NET Core 2.0 came out and .NET Standard 2.0 was finalized, it included over 32k APIs that made it extremely compatible with existing .NET Framework code. In fact, it's so compatible, I was able to easily take a 15-year-old .NET app and port it over to .NET Core 2.0 without any trouble at all.

They have more than doubled the set of available APIs from 13k in .NET Standard 1.6 to 32k in .NET Standard 2.0.

.NET Standard 2.0 is cool because it's supported on the following platforms:

  • .NET Framework 4.6.1
  • .NET Core 2.0
  • Mono 5.4
  • Xamarin.iOS 10.14
  • Xamarin.Mac 3.8
  • Xamarin.Android 7.5

When you're porting code over to .NET Core that has lots of Windows-specific dependencies, you might find yourself bumping into APIs that aren't a part of .NET Standard 2.0. So, there's a new (preview) Microsoft.Windows.Compatibility NuGet package that "provides access to APIs that were previously available only for .NET Framework."

There will be two kinds of APIs in the Compatibility Pack. APIs that were a part of Windows originally but can work cross-platform, and APIs that will always be Windows only, because they are super OS-specific. APIs calls to the Windows Registry will always be Windows-specific, for example. But the System.DirectoryServices or System.Drawing APIs could be written in a way that works anywhere. The Windows Compatibility Pack adds over 20,000 more APIs, on top of what's already available in .NET Core. Check out the great video that Immo shot on the compat pack.
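
Here's a sketch of that split in practice - guarding a Windows-only API (the registry, exposed by the compat pack via the Microsoft.Win32 types) behind a platform check so the same binary still runs on Linux and macOS. The subkey, value, and environment variable names are made up for illustration:

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32; // available via the Windows Compatibility Pack

class Program
{
    static void Main()
    {
        if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
        {
            // Windows-only path: read a setting from the registry
            using (var key = Registry.CurrentUser.OpenSubKey(@"Software\MyApp"))
            {
                Console.WriteLine(key?.GetValue("Setting") ?? "(not set)");
            }
        }
        else
        {
            // cross-platform path: fall back to an environment variable or config file
            Console.WriteLine(Environment.GetEnvironmentVariable("MYAPP_SETTING") ?? "(not set)");
        }
    }
}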

The point is, if the API that is blocking you from using .NET Core is now available in this compat pack, yay! But you should also know WHY you are porting to .NET Core. Work continues on both .NET Core and .NET (Full) Framework on Windows. If your app works great today, there's no need to port unless you need a .NET Core-specific feature. Here's a great list of rules of thumb from the docs:

Use .NET Core for your server application when:

  • You have cross-platform needs.
  • You are targeting microservices.
  • You are using Docker containers.
  • You need high-performance and scalable systems.
  • You need side-by-side .NET versions per application.

Use .NET Framework for your server application when:

  • Your app currently uses .NET Framework (recommendation is to extend instead of migrating).
  • Your app uses third-party .NET libraries or NuGet packages not available for .NET Core.
  • Your app uses .NET technologies that aren't available for .NET Core.
  • Your app uses a platform that doesn’t support .NET Core.

Finally, it's worth pointing out a few other tools that can aid you in using the right APIs for the job.

Enjoy!


Sponsor: Get the latest JetBrains Rider preview for .NET Core 2.0 support, Value Tracking and Call Tracking, MSTest runner, new code inspections and refactorings, and the Parallel Stacks view in debugger.


© 2017 Scott Hanselman. All rights reserved.

How to download embedded videos with F12 Tools in your browser


I got an email this week asking how to download some of my Azure Friday video podcast videos from http://friday.azure.com as well as some of the Getting Started Videos from Azure.com.

NOTE: Respect copyright and consider what you’re doing and WHY before you use this technique to download videos that may have been embedded for a reason.

I told them to download the videos with F12 tools, and they weren't clear how. I'll use an Azure Friday video for the example. Do be aware that there are a ton of ways to embed video on the web and this doesn't get around ones that REALLY don't want to be downloaded. This won't help you with Netflix, Hulu, etc.

First, I'll visit the site with the video I want in my browser. I'll use Chrome but this also works in Edge or Firefox with slightly different menus.

Then press F12 to bring up the Developer Tools pane and click Network. In Edge, click Content Type, then Media.

Download embedded videos with F12

Click the "clear" button to set up your workspace. That's the International No button there in the Network pane. Now, press Play and get ready.

Look in the Media list for something like ".mp4" or something that looks like the video you want. It'll likely have an HTTP response in the 2xx range.

Download 200

In Chrome, right-click on the URL and select Copy as cURL - pick cmd.exe if you're on Windows, and bash if you're on Linux/Mac.

Downloading with CURL

You'll get a crazy long command put into your clipboard. It's not all needed but it's a very convenient feature the browser provides, so it's worth using.

Get Curl: If you don't have the "curl" command you'll want to download "curl.exe" from here https://curl.haxx.se/dlwiz/ and, if you like, put it in your PATH. If you have Windows, get the free bundled curl version with installer here.

Open a terminal/command prompt - run cmd.exe on Windows - and paste in the command. If the browser you're using only gives you the URL and not the complete "curl" command, the command you're trying to build is basically curl [url] -o [outputfile.mp4]. It's best if you can get the complete command like the one Chrome provides, as it may include authentication cookies or other headers that omitting may prevent your download from working.


BEFORE you press enter, make sure you add "-o youroutputfilename.mp4". Also, if you get an error about security and certificates, you may need to add "--insecure".
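
Put together, the minimal form looks something like this (the URL is a made-up placeholder - paste the real one the browser copied for you):

curl "https://example.com/videos/azure-friday-episode.mp4" -o test.mp4 --insecure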

Downloading a streaming video file with CURL

In the screenshot above I'm saving the file as "test.mp4" on my desktop.

There are several ways to download embedded videos, including a number of online utilities that come and go, but this technique has been very reliable for me.


Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today



© 2017 Scott Hanselman. All rights reserved.

Azure Cloud Shell - your own bash shell and container - right inside Visual Studio Code


Visual Studio Code has a HUGE extension library. There are also almost two dozen very nice Azure-specific extensions, as well as extensions for Docker, etc. If you write an Azure extension yourself, you can depend on the Azure Account Extension to handle the administrivia of the user logging into Azure and selecting their subscription. And of course, the Azure Account Extension is open source.

Here's the cool part - I think, since I just learned it. If you have the Azure Account Extension installed (again, you can install it directly or get it as a dependency), you also get the ability to open an Azure Cloud Shell directly inside VS Code. That means a little container spins up in the cloud, and you quickly get a real bash shell or a real PowerShell shell. AND the Azure Cloud Shell is automatically logged in as you and already has a ton of tools pre-installed.

Here's how you do it. Open the Command Palette (Shift-Ctrl-P) and run the Azure sign-in command that the Azure Account extension adds.

VS Code Command Palette

It will pop up a message with a "copy & open" button. It'll launch a browser, then you enter a special code after logging into Azure to OAuth VS Code into your Azure account.


At this point, open a Cloud Shell with Shift-Ctrl-P and type "Bash" or "PowerShell"...it'll autocomplete so you can type a lot less, or set up a hotkey.

Your Cloud Shell will appear alongside your local terminals!

Azure Cloud Shell in VS Code

Note that there's a "clouddrive" folder mapped to your Azure Storage, so you can keep stuff in there. Even though the Shell goes away after about 20 minutes of non-use, your stuff (scripts, whatever) is persisted.
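
For example, here's a quick sketch of stashing a script where it will survive the container being recycled (the script itself is made up - the point is the ~/clouddrive location):

scott@Azure:~$ echo 'az group list -o table' > ~/clouddrive/list-groups.sh
scott@Azure:~$ chmod +x ~/clouddrive/list-groups.sh
scott@Azure:~$ ~/clouddrive/list-groups.sh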


There's a bunch of tools preinstalled you can use as well!

scott@Azure:~$ node --version

v6.9.4
scott@Azure:~$ dotnet --version
2.0.0
scott@Azure:~$ git --version
git version 2.7.4
scott@Azure:~$ python --version
Python 3.5.2
scott@Azure:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial

And finally, when you type "azure" or "az" for the various Azure CLI (Command Line Interface) tools, you'll find you're already authenticated/logged into Azure, so you can create VMs, list websites, manage Kubernetes clusters, all from within VS Code. I'm still exploring, but I'm enjoying what I'm seeing.


Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today



© 2017 Scott Hanselman. All rights reserved.

Accelerated 3D VR, sure, but impress me with a nice ASCII progress bar or spinner


I'm glad you have a 1080p 60fps accelerated graphics setup, but I'm old school. Impress me with a really nice polished ASCII progress bar or spinner!

I received two tips this week about cool .NET Core ready progress bars so I thought I'd try them out.

ShellProgressBar by Martijn Laarman

This one is super cool. It even supports child progress bars for async stuff happening in parallel! It's very easy to use. I was able to get a nice looking progress bar going in minutes.

static void Main(string[] args)
{
    const int totalTicks = 100;
    var options = new ProgressBarOptions
    {
        ForegroundColor = ConsoleColor.Yellow,
        ForegroundColorDone = ConsoleColor.DarkGreen,
        BackgroundColor = ConsoleColor.DarkGray,
        BackgroundCharacter = '\u2593'
    };
    using (var pbar = new ProgressBar(totalTicks, "Initial message", options))
    {
        pbar.Tick(); // will advance pbar to 1 out of 100
        // we can also advance and update the progressbar text
        pbar.Tick("Step 2 of 10");
        TickToCompletion(pbar, totalTicks, sleep: 50);
    }
}

Boom.

Cool ASCII Progress Bars in .NET Core

Be sure to check out the examples for ShellProgressBar, specifically ExampleBase.cs where he has some helper stuff like TickToCompletion() that isn't initially obvious.

Kurukuru by Mayuki Sawatari

Another nice progress system that is in active development for .NET Core (like super active...I can see they updated code an hour ago!) is called Kurukuru. This code is less about progress bars and more about spinners. It's smart about Unicode vs. non-Unicode, as there are a lot of cool characters you could use in a Unicode-aware console that make for attractive spinners.

What a lovely ASCII Spinner in .NET Core!

Kurukuru is also super easy to use and to integrate into your code. It also uses the "using" disposable pattern in a clever way. Wrap your work, and if you throw an exception, it will show a failed spinner.

Spinner.Start("Processing...", () =>
{
    Thread.Sleep(1000 * 3);

    // MEMO: If you want to show as failed, throw an exception here.
    // throw new Exception("Something went wrong!");
});

Spinner.Start("Stage 1...", spinner =>
{
    Thread.Sleep(1000 * 3);
    spinner.Text = "Stage 2...";
    Thread.Sleep(1000 * 3);
    spinner.Fail("Something went wrong!");
});

TIP: If your .NET Core console app wants to use an async Main (like I did) and call Kurukuru's async methods, you'll want to indicate you want to use the latest C# 7.1 features by adding this to your project's *.csproj file:

<PropertyGroup>
    <LangVersion>latest</LangVersion>
</PropertyGroup>

This allowed me to do this:

public static async Task Main(string[] args)
{
    Console.WriteLine("Hello World!");
    await Spinner.StartAsync("Stage 1...", async spinner =>
    {
        await Task.Delay(1000 * 3);
        spinner.Text = "Stage 2...";
        await Task.Delay(1000 * 3);
        spinner.Fail("Something went wrong!");
    });
}

Did I miss some? I'm sure I did. What nice ASCII progress bars and spinners make YOU happy?

And again, as with all Open Source, I encourage you to HELP OUT! I know the authors would appreciate it.


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2017 Scott Hanselman. All rights reserved.

The 2017 Christmas List of Best STEM Toys for kids


In 2016 and 2015 I made a list of best Christmas STEM Toys for kids! If I may say so, they are still good lists today, so do check them out. Be aware I use Amazon referral links so I get a little kickback (and you support this blog!) when you use these links. I'll be using the pocket money to...wait for it...buy STEM toys for kids! So thanks in advance!

Here's a Christmas List of things that I've either personally purchased, tried for a time, or borrowed from a friend. These are great toys and products for kids of all genders and people of all ages.

Piper Computer Kit with Minecraft Raspberry Pi edition

The Piper is a little spendy at first glance, but it's EXTREMELY complete and very thoughtfully created. Sure, you can just get a Raspberry Pi and hack on it - but the Piper is not just a Pi. It's a complete kit where your little one builds their own wooden "laptop" box (more of a luggable), and then starting with just a single button, builds up the computer. The Minecraft content isn't just vanilla Microsoft. It's custom episodic content! Custom voice overs, episodes, and challenges.

What's genius about Piper, though, is how the software world interacts with the hardware. For example, at one point you're looking for treasure on a Minecraft beach. The Piper suggests you need a treasure detector, so you learn about wiring and LEDs and wire up a treasure detector LED while it's running. Then you run your Minecraft person around, and the LED blinks faster as you get closer to the treasure. It's absolute genius. Definitely a favorite in our house for the 8-12 year old set.

Piper Raspberry Pi Kit

Suspend! by Melissa and Doug

Suspend is becoming the new Jenga for my kids. The game doesn't look like much if you judge a book by its cover, but it's addictive and my kids now want to buy a second one to see if they can build even higher. An excellent addition to family game night.

Suspend! by Melissa and Doug

Engino Discovering Stem: Levers, Linkages & Structures Building Kit

I love LEGO, but I'm always trying new building kits. Engino is reminiscent of LEGO Technic or some of the advanced LEGO elements, but this modestly priced kit is far more focused - even suitable for incorporating into home schooling.

Engino Discovering Stem: Levers, Linkages & Structures Building Kit

Gravity Maze

I've always wanted a 3D Chess Set. Barring that, check out Gravity Maze. It's almost like a physical version of a well-designed iPad game. It includes 60 challenges (levels) that you add pieces to in order to solve. It gets harder than you'd think, fast! If you like this, also check out Circuit Maze.


Osmo Genius Kit (2017)

Osmo is an iPad add-on that takes the ingenious idea of an adapter that lets your iPad see the tabletop (via a mirror/lens) and then builds on that clever concept with a whole series of games, exercises, and core subject tests. It's best for the under 12 set - I'd say it's ideal for about 6-8 year olds.



Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2017 Scott Hanselman. All rights reserved.

Setting up a managed container cluster with AKS and Kubernetes in the Azure Cloud running .NET Core in minutes


After building a Raspberry Pi Kubernetes Cluster, I wanted to see how quickly I could get up to speed on Kubernetes in Azure.

  • I installed the Azure CLI (Command Line Interface) in a few minutes - works on Windows, Mac or Linux.
    • I also remembered that I don't really need to install anything locally. I could just use the Azure Cloud Shell directly from within VS Code. I'd get a bash shell, Azure CLI, and automatically logged in without doing anything manual.
    • Anyway, while needlessly installing the Azure CLI locally, I read up on the Azure Container Service (AKS) here. There's walkthrough for creating an AKS Cluster here. You can actually run through the whole tutorial in the browser with an in-browser shell.
  • After logging in with "az login" I made a new resource group to hold everything with "az group create -l centralus -n aks-hanselman". It's in the centralus region and it's named aks-hanselman.
  • Then I created a managed container service like this:
    C:\Users\scott\Source>az aks create -g aks-hanselman -n hanselkube --generate-ssh-keys
    / Running ...
  • This runs for a few minutes while creating, then when it's done, I can get ahold of the credentials I need with
    C:\Users\scott\Source>az aks get-credentials --resource-group aks-hanselman --name hanselkube
    Merged "hanselkube" as current context in C:\Users\scott\.kube\config
  • I can install the Kubernetes CLI "kubectl" easily with "az aks install-cli".
    Then list out the nodes that are ready to go!
    C:\Users\scott\Source>kubectl get nodes
    NAME                       STATUS    ROLES     AGE       VERSION
    aks-nodepool1-13823488-0   Ready     agent     1m        v1.7.7
    aks-nodepool1-13823488-1   Ready     agent     1m        v1.7.7
    aks-nodepool1-13823488-2   Ready     agent     1m        v1.7.7

A year ago, Glenn Condron and I made a silly web app while recording a Microsoft Virtual Academy course. We use it for demos and to show how even old (now over a year old) containers can still be easily and reliably deployed. It's up at https://hub.docker.com/r/glennc/fancypants/.

I'll deploy it to my new Kubernetes Cluster up in Azure by making this yaml file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: fancypants
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: fancypants
    spec:
      containers:
      - name: fancypants
        image: glennc/fancypants:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: fancypants
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: fancypants

I saved it as fancypants.yml, then ran kubectl create -f fancypants.yml.

I can run kubectl proxy and then hit http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/overview?namespace=default to look at the Kubernetes Dashboard, proxied locally, but all running in Azure.


When fancypants is created and deployed, then I can find out its external IP with:

C:\Users\scott\Sources>kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fancypants LoadBalancer 10.0.116.145 52.165.232.77 80:31040/TCP 7m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 18m

There's my IP. I hit it and boom, I've got fancypants in the managed cloud. I only have to pay for the VMs I'm using, and not for the VM that manages Kubernetes. That means the "kube-system" namespace is free; I pay for other namespaces, like my "default" one.


Best part? When I'm done, I can just delete the resource group and take it all away. Per minute billing.

C:\Users\scott\Sources>az group delete -n aks-hanselman --yes

Super fun and just took about 30 min to install, read about, try it out, write this blog post, then delete. Try it yourself!


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2017 Scott Hanselman. All rights reserved.

Visualizing your real-time blood sugar values AND a Git Prompt on Windows PowerShell and Linux Bash

$
0
0

My buddy Nate became a Type 1 Diabetic a few weeks back. It sucks...I've been one for 25 years. Nate is like me - an engineer - and the one constant with all engineers who become diabetic is that we try to engineer our way out of it. ;) I use an open source artificial pancreas system with an insulin pump and a continuous glucose monitoring system. At the heart of that system is some server-side software called Nightscout that has APIs for managing my current and historical blood sugar. It's updated every 5 minutes, 24 hours a day.

I told Nate to get Nightscout set up ASAP and start playing with the API. Yesterday he added his blood sugar to his terminal prompt! Love this. He uses Linux, but I use Linux (Ubuntu) on Windows 10, so I wanted to see if I could run his little node app from Windows (I'll make it a Windows service).

Yes, you can run cron jobs under Windows 10's Ubuntu, but only when there is an instance of bash running (the Linux subsystem shuts down when it's not used), and upstart doesn't work yet. I could run it from .bashrc or use various hacks/workarounds to keep WSL (Windows Subsystem for Linux) running, but the benefit of running this as a Windows Service is that I can see my blood sugar in all prompts on Windows, like PowerShell, as well!

I'll use the "non-sucking service manager (NSSM)" to run Nate's non-Windows-service node app as a Windows service. I ran "nssm install nsprompt" and got this GUI. Then I added the --nightscout parameter and passed in my Nightscout blood sugar website. You'll get an error immediately when the service runs if this is wrong.

NSSM Service Installer

From the Log on tab, make sure the service is logged on as you. I log in with my MSA (Microsoft Account) so I used my email address. This is to ensure that when the app writes to ~ on Windows, it's putting your sugars in c:\users\LOGGEDINUSER\.

Next, run the service with "sc start NSPrompt" or from the Services GUI.

My sugar updater runs in a Windows Service

Nate's node app gets blood sugar from Nightscout and puts it in ~/.bgl-cache. However, since I'm running it from the Windows side while changing the Bash/Ubuntu on Windows prompt from Linux, it's important to note that from Windows, ~/ is really c:\users\LOGGEDINUSER\, so I changed the Bash .profile to load the values from the Windows-mounted drives like this:

eval "$(cat /mnt/c/Users/scott/.bgl-cache)"
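
If you want the value in the bash prompt itself, here's a minimal sketch - this is NOT Nate's actual script, and it assumes the cache file declares its values with "local" (which is why the eval happens inside a function, where "local" is legal):

bgl_prompt() {
  eval "$(cat /mnt/c/Users/scott/.bgl-cache)"
  echo "${nightscout_bgl}"
}
PS1='$(bgl_prompt) \w\$ '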

Also, you need to make sure that you're using a Unicode font in your console. For example, I like using Fira Code Light, but it doesn't have a single character ⇈ double-up arrow (U+21C8), so I replaced it with two singles. You get the idea. You need a font that has the glyphs you want and you need those glyphs displaying properly in your .profile text file.

You'll need a Unicode Font

And boom. It's glorious. My current blood sugar and trends in my prompt. Thanks Nate!

My sugars!

So what about PowerShell as well? I want to update that totally different prompt/world/environment/planet from the same file that's updated by the service. Also, I already have a custom prompt with Git details since I use Posh-Git from Keith Dahlby (as should you).

I can edit $profile.CurrentUserAllHosts with "powershell_ise $profile.CurrentUserAllHosts" and add a prompt function before "import-module posh-git."

Here's Nate's same prompt file, translated into a PowerShell prompt() method, chained with PoshGit. So I can now see my Git Status AND my Blood Sugar. My two main priorities!

NOTE: If you don't use posh-git, you can remove the "Write-VcsStatus" line and the "Import-Module posh-git" line, and you should be set!

function prompt {
    Get-Content $ENV:USERPROFILE\.bgl-cache | %{$bgh = @{}} {if ($_ -match "local (.*)=""(.*)""") {$bgh[$matches[1]]=$matches[2].Trim();}}
    $trend = "?"
    switch ($bgh.nightscout_trend) { "DoubleUp" {$trend="↑↑"} "SingleUp" {$trend="↑"} "FortyFiveUp" {$trend="↗"} "Flat" {$trend="→"} "FortyFiveDown" {$trend="↘"} "SingleDown" {$trend="↓"} "DoubleDown" {$trend="↓↓"} }
    $bgcolor = [Console]::ForegroundColor.ToString()
    if ([int]$bgh.nightscout_bgl -ge [int]$bgh.nightscout_target_top) {
        $bgcolor = "Yellow"
    } ElseIf ([int]$bgh.nightscout_bgl -le [int]$bgh.nightscout_target_bottom) {
        $bgcolor = "Red"
    } Else {
        $bgcolor = "Green"
    }

    Write-Host $bgh.nightscout_bgl -NoNewline -ForegroundColor $bgcolor
    Write-Host $trend" " -NoNewline -ForegroundColor $bgcolor
    [Console]::ResetColor()

    $origLastExitCode = $LASTEXITCODE
    Write-Host $ExecutionContext.SessionState.Path.CurrentLocation -NoNewline
    Write-VcsStatus
    $LASTEXITCODE = $origLastExitCode
    "$('>' * ($nestedPromptLevel + 1)) "
}

Import-Module posh-git

Very cool stuff.

Blood Sugar and Git in PowerShell!

This concept, of course, could be expanded to include your heart rate, FitBit steps, or any health related metrics you'd like! Thanks Nate for the push to get this working on Windows!


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2017 Scott Hanselman. All rights reserved.

How to set up a 10" Touchscreen LCD for Raspberry Pi


I'm a big fan of the SunFounder tech kits (https://www.sunfounder.com), and my kids and I have built several Raspberry Pi projects with their module/sensor kits. This holiday vacation we have two projects we're doing that coincidentally use SunFounder parts. The first is the Model Car Kit, which uses a Raspberry Pi to control DC motors AND (love this part) a USB camera. So it's not just a "drive the car around" project; it can also include computer vision. My son wants to teach it to search the house for LEGO bricks and alert an adult so they won't step on them. We're thinking of having the car call out to Azure Cognitive Services, as their free tier has more than enough power for what we need.

For this afternoon, we are taking a 10.1" Touchscreen display and adding it to a Raspberry Pi. I like this screen because it works on pretty much anything that has HDMI, and it's got mounting holes on the back for any Raspberry Pi, a LattePanda, or a BeagleBone. You can also use it for basically anything that can output HDMI, so it can be a small portable monitor/display for Android or iOS. It has 10-finger multitouch, which is fab. The instructions aren't linked from their product page, but I found them on their Wiki.

There are a lot of small LCDs you can get for a Pi project, from little 5" screens (for about $35) all the way up to this 10" one I'm using here. If you're going to mount your project on a wall or 3D print a box, a screen adds a lot. It's also a good way to teach kids about embedded systems. When my 10 year old saw the 5" screen and what it could do, he realized that the thermostat on the wall and/or the microwave ovens were embedded systems. Now he assumes every appliance is powered by a Raspberry Pi!

Sunfounder Controller board AND Raspberry Pi Mounted to the 10.1" Touchscreen Booting Windows 10 on a Raspberry Pi for no reason

Take a look at the pic at the top right of this post. That's not a Raspberry Pi - that's the included controller board that interfaces with your tiny computer. It comes with the LCD package. That controller board also has an included power adapter that puts out 12V at 1500mA, which allows it to also power the Pi itself. That means you can power the whole thing with a single power adapter.

There's also an optional touchscreen "matchbox" keyboard package you can install to get an on-screen visual keyboard. However, when I'm initially setting up a Raspberry Pi, or I'm taking a few Pis on the road for demos and working in hotels, I throw this little $11 keyboard/mouse combo in my bag. It's great for the quick initial setup of a Raspberry Pi that isn't yet on the network.

Matchbox Touchscreen Keyboard

Once you've installed matchbox-keyboard you'll find it under MainMenu, Accessories, Keyboard. Works great!

* This post includes some referral links to Amazon.com. When you use these links, you not only support my blog, but you send a few cents/dollars my way that I use to pay for hosting and buy more gadgets like these! Thanks! Also, I have no relationship with SunFounder but I really like their stuff. Check out their site.


Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today!



© 2017 Scott Hanselman. All rights reserved.