Dell XPS 9350 + Ubuntu 16.04 (beta1, Feb 27th)


I have been DYING to try Mir ever since it was announced. Moving past X, integrated libinput for a solid touchpad experience .. the entire thing is very exciting. Since I’m lucky enough to have a System76 Galago Ultrapro AND a Dell XPS 13 9350, I figured I’d try to install 16.04beta1 (ubuntu/unity) on my XPS. I figured if it blew the laptop up I’d still have another daily driver I could use.

EDIT 2/28/2016 – I have the XPS 9350 1080p i5 no touchscreen model with broadcom wifi.

NOTE 4/7/2016: Read the comments for good feedback from others trying. Luis noted: “..before attempting to install one needs to boot and in the BIOS configure SATA-controller to AHCI (or Off).”. He also noted there’s a bug for the screen flicker issue.


I downloaded the nightly iso from here and burned it to a thumb drive with the ‘Startup Disk Creator’ app in Ubuntu. Plugged it into the XPS and rebooted, selected the USB drive and off I went.


Wifi was detected immediately during the installation. That was a huge sign for me since I have the Broadcom chip in my XPS, which historically was not supported on linux at all until the 4.3/4.4 kernels. Good news is it’s working like a champ.


The display looks as good as ever, no issues at all during the installation.

Special Keys/Touchpad/Keyboard

Everything “just worked” during installation.

Installation Wrapup

Everything worked flawlessly. I even installed using UEFI (when I was running 15.10 I had turned this off in the BIOS and was using ‘legacy’ mode). Now onto the details of how it’s running after installation.



After installation the first thing I wanted to do was install Mir and test it out. I installed it using the following

sudo apt-get -y update && sudo apt-get -y dist-upgrade && sudo apt-get -y install unity8-desktop-session-mir

On reboot I clicked the little ubuntu icon above and to the right of my username and changed it to the mir ‘8’ (looks like an 8-ball to me). I logged in and … nothing. Dunno what happened, I rebooted the laptop and logged in again and this time it logged in! (…..and looked terrible).

The resolution was screwed up, I couldn’t launch anything … to me it looked like it thought it was in some sort of phone or tablet mode. I have no idea and I couldn’t adjust anything so I quickly gave up. Oh well, I’ll keep trying as they get closer to release.

Other Software

The other things I normally do on a new installation are install chrome, dropbox, and libinput.


You can quickly install chrome using the following

cd ~/Downloads
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb && sudo dpkg -i google-chrome-stable_current_amd64.deb && sudo apt-get -f -y install

I ended up needing the ‘apt-get -f install’ to correct dependencies … seemed to work fine and chrome was immediately available from the launcher.


You can quickly install dropbox using the following

cd ~/Downloads
wget -O - "" | tar xzf -
sudo dpkg -i dropbox_2015.10.28_amd64.deb && sudo apt-get -y install python-gtk2 && sudo apt-get -f -y install


You can use my steps from my StackOverflow post here. Vote my answer up if you used it. 🙂

External Monitor

I was hoping the external monitor worked, and it does … sorta.

I bought this cable before and it worked… and still does. But I could only get the monitor to turn on when the laptop was unplugged from the power supply. And I had to unplug the external monitor cable from the laptop a couple times THEN it would come up. Then I could plug in the laptop to the power supply and everything worked fine (I’m using it now to type in this blog post).

So I certainly still recommend the cable, just make sure you unplug the power connector from the laptop, then plug the external monitor cable into the USB-C, then once the monitor is on you can plug the power connector back in.

It’s weird .. but it worked and I’m not complaining too much honestly.

UPDATE: 4/7/2016 – External monitor no longer works. Hoping by release things are back to working.


I’d call this a roaring success honestly. With the very notable exception of Mir, everything else on the laptop works 100% out of the box (including Unity 7). Mir will stabilize … but not having to mess around with wifi drivers or skylake processor issues by installing kernels by hand is a WONDERFUL success for 16.04 for me.

Let me know in the comments if you have good experiences installing 16.04.

EDIT: Well, after I rebooted I can’t get to a login screen. 🙂 I unplugged the external monitor … unplugged the power cable .. rebooted a couple times. No dice. THAT’S NOT GOOD!! 🙂 Nice thing is I can get to a terminal window and it has wifi so I can keep updating with ‘apt-get’ and see how things evolve. So for now don’t upgrade if you want a working laptop … YOU HAVE BEEN WARNED!!

LAST EDIT! Everything is good. It was the libinput settings file. I adjusted my stackoverflow post to include a couple more lines and things are booting fine.
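For reference, the kind of xorg libinput snippet involved looks roughly like this (illustrative only; the exact lines I ended up with are in the stackoverflow post):

```
# e.g. /usr/share/X11/xorg.conf.d/90-libinput.conf (stock Ubuntu 16.04 layout)
Section "InputClass"
        Identifier "libinput touchpad catchall"
        MatchIsTouchpad "on"
        MatchDevicePath "/dev/input/event*"
        Driver "libinput"
EndSection
```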

HTML5 and Docker


If you’ve ever messed with UI programming that utilized the ‘hash’ URL technique for invoking UI actions, it’s time to go html5 native!

This blog post will describe how to set up an apache docker container with the pieces enabled for URL rewriting. This allows html5 ‘pushState’ to be used instead of hash operations. Some advantages HTML5 pushState offers over hash:

  • Cleaner looking for sharing URLs (http://…/#/employees/list vs http://…/employees/list)
  • Better Search Engine Optimization (SEO), since the site can be pre-generated/crawled and routes returned server-side, whereas hash routes always have to be processed on the client and are not SEO friendly

The major disadvantage is that it requires the server to be involved .. that’s where this post comes in.

NOTE: For angular2, there is a debate going on for what the default should be, hash or html5. You can read about it here.


I have a little toy project I have been using for a few years for Spring Boot and angular to help me prove out concepts. I won’t go into any detail on the UI itself (I recently converted from angular1 -> angular2 and that’s what drove this blog post, but it’s applicable for any UI tech that utilizes hash URLs).

NOTE: All code is in my git repo.

To try everything out you’ll need a recent version of docker installed, plus Node.js for the frontend build. If you are using an ubuntu/debian based distro you can install Node.js with:

$ curl -sL | sudo -E bash -
$ sudo apt-get install -y nodejs
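Docker itself I usually grab with the convenience script from get.docker.com (one of several install options; it pipes a remote script to sh, so read it first if that bugs you):

```shell
# docker convenience install script (requires network and root)
$ curl -sSL https://get.docker.com/ | sh
```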

From here you need a Dockerfile like so:

# Pull base apache image
# ---------------
FROM httpd:2.4
# Maintainer
# ----------
MAINTAINER Jim Basilio <>
# Copy file in as daemon user
USER daemon
# httpd.conf turns on rewrite module and rewrites 404 errors to load index.html then redirect client
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf
USER root
# Define default command to start apache in the foreground.
CMD [ "httpd-foreground" ]

What this does is build a docker image and copy in an httpd.conf that turns on mod_rewrite and rewrites requests that would otherwise 404 so index.html is served instead.
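The image itself gets built from the directory containing the Dockerfile and httpd.conf; the tag is whatever you’ll pass to docker run (I use html5-apache to match the run command later):

```shell
# run from the directory containing the Dockerfile and httpd.conf
$ sudo docker build -t html5-apache .
```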

For example, when a user first loads your app, index.html loads angular (or whatever UI framework you are running) and then angular takes over managing client routing. In a later session, when a user ‘direct links’ to a route like /users/list, the browser makes a server request to apache (in this case) to serve the ‘users/list’ folder (normally looking for an index.html). However, we don’t have that folder on our server since it really was a UI route, so a 404 would be returned from apache.

The rewrite rule is:

    Options +FollowSymLinks
    IndexIgnore */*
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule (.*) index.html

I didn’t originally write the above, I took it from this SO gist. When apache reads it, any request for a path that isn’t an existing file or directory gets rewritten so index.html is served, and angular then performs the routing it needs. From the user’s perspective all of this is seamless and it just “does the right thing”. Without this in place, the user’s direct link would result in a “404 not found” and a very confused user. To start the container you can run:

$ sudo docker run -d -v $(pwd):/usr/local/apache2/htdocs --name=html5-apache -p 8080:80 -t html5-apache

Be sure to customize the ‘$(pwd)’ with whatever the root is for your application.
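One gotcha: the rewrite rule only works if the httpd.conf you COPY in actually loads mod_rewrite and allows it for htdocs. With the stock httpd:2.4 layout the relevant pieces look roughly like this (illustrative):

```
# make sure this LoadModule line is uncommented in httpd.conf
LoadModule rewrite_module modules/mod_rewrite.so

<Directory "/usr/local/apache2/htdocs">
    Options +FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
```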

That’s it! You now have a docker image you can use for any app that requires html5 pushState routing. This is a good alternative to using ‘lightweight’ dev http servers as seen with angular2 development stacks. You can start the container and let it run, adjusting the shared volume with your source code (i.e. delete the files, change the files, whatever) and when you reload your site everything will instantly update.

I’m using this setup for my hiit-frontend project and it’s working great. I can run ‘npm run build’ and it recompiles all my typescript and moves the site to the ‘dist’ folder. The dist folder is my shared volume to docker, which I just leave running all the time (I intend to write more about angular2 and the build stack I’m using in another blog post).
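For that setup my run command looks roughly like this (the path is illustrative; point it at wherever your build output lands):

```shell
# mount the compiled 'dist' output as the apache docroot
$ sudo docker run -d -v ~/hiit-frontend/dist:/usr/local/apache2/htdocs --name=html5-apache -p 8080:80 html5-apache
```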