SOUNDPEATS TRUENGINE2 In Ear Headphones (Happy Father’s Day!)

    June 23, 2020

    For father’s day (2020), my family got me a pair of SOUNDPEATS TRUENGINE2 in-ear headphones.

    I had a pair of Sony WF-1000X headphones years ago, and while functional, they left a lot to be desired when it came to staying in your ear. They were IMPOSSIBLE to use while moving quickly, and at a walking pace they were just ‘ok’. Sound quality was generally poor, and stability in the ear was terrible.

    I was a little apprehensive about these headphones, but I had used a generic ‘Apple EarPod’ style earphone from one of my sons and was amazed at how stable his were (even though they “weren’t the real thing”). They weighed seemingly nothing and were very stable for a run. Sound quality on those ‘earpods’ was poor, but they were good enough for podcasts (not so much music).

    The TRUENGINE2s I’m using are set up with the default ‘medium’ tip. They go in deep, and I was concerned they might give me ear fatigue. However, I went on a 1.5 hour bike ride with them, did circuit training (pullups, pushups, dips, etc.), and they never moved: perfectly stable and very comfortable. I pretty much forgot they were there! The sound quality was very good for podcasts and music (not as good as over-ear headphones, but orders of magnitude better than my previous Sonys). They support Bluetooth 5.0, aptX, and all the trimmings. I connected to them with my Android Pixel XL.

    I’m still getting used to the tap controls … they work fine, but my memory of one-tap, two-tap, and right vs. left taps isn’t strongly associated with their functions yet. That will come.

    All in all I think it’s an outstanding product!


    I did a podcast!

    February 13, 2017

    James Studdart had me on his podcast, Cynical Developer, to talk about Angular2. It was my first time doing a podcast … and I got to do it twice because of technical difficulties with the first recording. 🙂 That was actually good for me, because the first take was pretty rough (for me) and I needed to organize my thoughts better.

    Have a listen if you’re interested, would love to hear your thoughts in the comments!


    hardware, linux

    XPS 13 9350 – 16.10 and Broadcom (DW 1820A) woes (with Intel 8260NGW fix!)

    November 6, 2016

    After upgrading to 16.10, when I would resume from sleep my XPS 13 would reconnect to my network but not have working DNS (it had a connection, but could only reach sites by IP address, not by name). I was able to issue a

    sudo systemctl restart network-manager

    and this generally brought the network back online properly … but I noticed my speed was TERRIBLE.

    I’d generally had great luck with my XPS 13 9350 and its Broadcom wireless card (DW 1820A), although I had read that even the great Linus himself had terrible luck with it and replaced it with an Intel card. With 16.10 coming along and causing issues, I decided to take the (small, approx. $25) plunge and replace the card.

    I bought this Intel 8260NGW 3rd generation wifi card and hoped everything would work out. It sure has!!

    First, getting the screws out of the laptop was a complete PITA. There are VERY small Torx screws on the bottom of the laptop; luckily a small flathead jeweler’s screwdriver did the job … although I’d recommend something to help you grip, since you need to torque the screwdriver quite a bit to get the screws to turn.

    PLEASE NOTE! Be very careful with your very expensive laptop. There’s nothing COMPLICATED about removing the back of the laptop, but be gentle. When you get the screws out, you’ll want to get something in between the frame and the metal panel. Use something plastic (not metal, which will scrape), and once you have some leverage the entire back will pop off with a little pressure. There are no cables attached to the back panel, so nothing will be pulled loose.

    With my handy rubber grip and screwdriver I removed the back of the laptop and could then remove the broadcom wifi adapter.

    I carefully pried the 2 antenna connections off the old card, put in the new card, and carefully applied pressure to the antenna connections to reseat them. BE CAREFUL HERE!! You don’t want to ‘pop’ something with such small electronics.

    Once I plugged in the new card and snapped the back of the laptop back on … I fired up 16.10 and was greeted with mega fast speed (866 Mb/s)!

    Yay intel. 🙂


    How to flash your bios for Dell XPS 9350

    August 13, 2016

    Dell’s firmware flash process isn’t built for linux. They do have instructions posted here, but those require making a thumb drive bootable with FreeDOS. I tried those steps and didn’t have any luck.

    Instead, I wanted to summarize a simpler solution, one that’s BUILT INTO THE LAPTOP (why they recommend a process with more complicated steps, I have no idea …).

    To update the firmware, perform the following steps. You may need to be in UEFI boot mode for this to work, since the file is copied into the /boot/efi folder.

    1. First, ensure your laptop is plugged in. The update won’t run without being plugged into the wall.
    2. Next, download the newest firmware from Dell’s site (at the time of this post, 8/13/2016, it was v1.4.4).
    3. Copy the firmware to your /boot/efi folder by opening a terminal and running ‘sudo cp ~/Downloads/XPS*.exe /boot/efi’
    4. Reboot your laptop and hit F12 from the Dell splash screen
    5. Select BIOS Flash Update
    6. Click the ‘…’ button and select the XPS*.exe file
    7. Select Begin Flash Update
    8. Enjoy.

    The 1.4.4 firmware lists USB-C fixes, among other issues addressed. Once I installed the firmware, I was able to plug my external monitor in and it instantly came up (this did work many times before, but sometimes I had to plug the laptop into power first in order for it to work). YMMV, but hopefully the newest firmware makes USB-C rock solid. Dell’s release notes follow:

    Fixes & Enhancements

    1. Improve touchscreen disable feature functionality
    2. Added support for pre-OS MAC address pass-through for Dell Docks and specific Dell LAN dongles; display of the MAC address pass-through value in BIOS Setup
    3. Improved Type-C device performance and stability

    linux, ssl

    Add Root CA Trust to Linux

    May 7, 2016


    If you are using linux behind a company proxy/firewall, odds are you have issues accessing ssl resources (i.e. https). The company likely has its own Certificate Authority (CA) that issues private certificates. These certificates are not issued by ‘trusted’ authorities (i.e. Verisign et al.), so the browser does not ‘trust’ them and will respond with a certificate warning.

    In this post I’ll show you how to add your Root CA to the linux certificate store, as well as firefox and chrome.

    The Truth

    The ‘funny’ thing about corporate proxies is that they are essentially man-in-the-middle attacks. I’m not a networking professional, but as I understand it, the corporate proxy decrypts your traffic and then re-encrypts it with the private cert on the way in. Since your browser trusts the company root CA, it doesn’t question the validity of this. Hence the company gets access to all your ‘encrypted’ traffic to ensure you aren’t sending anything they don’t want you to send (i.e. company IP, etc.).

    Not judging here, just giving information.

    Identify the Certificate of Interest

    In order to know what cert to add, we first have to locate the cert being used at the proxy. To be safe, you can open a browser, look at the certificates, export them ALL, and then import them all; that will surely catch the cert of interest. That said, this is how you can inspect things:

    $ openssl s_client -connect www.wormly.com:443
    verify error:num=20:unable to get local issuer certificate
    verify return:0
    Certificate Chain
    0 ....
    1 HERE

    In the above example, HERE marks the cert I needed to add to the linux CA store. YMMV, but generally the above will show you what cert is being used at the proxy.
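    If you’d rather not click around in a browser at all, ‘openssl s_client -showcerts’ will dump the whole chain, which you can then split into one file per certificate. Here’s a sketch; the heredoc below is just placeholder text standing in for real s_client output, and the filenames are made-up examples:

```shell
# On your real network you'd capture the chain with something like:
#   openssl s_client -connect www.wormly.com:443 -showcerts </dev/null > chain.pem
# A placeholder chain.pem so the splitting step is runnable anywhere:
cat > chain.pem <<'EOF'
-----BEGIN CERTIFICATE-----
(placeholder for cert 0)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(placeholder for cert 1, the proxy root we are after)
-----END CERTIFICATE-----
EOF

# Write one file per certificate: cert-1.crt, cert-2.crt, ...
awk '/-----BEGIN CERTIFICATE-----/ {n++}
     n {print > ("cert-" n ".crt")}' chain.pem

ls cert-*.crt
```

    Each resulting .crt can then be exported/imported individually, exactly like the browser-exported certs below.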

    Export the cert(s)

    Now that we know the cert (or just want to grab them all), you can export the certs. I used IE in my example, but you could export from any browser that has the Root CA trusted … in IE go to

    Internet Explorer->Internet Options->Content->Certificates

    Export each cert in X.509 DER format and save to disk.

    Import the certs for command line

    Copy each cert to the /tmp folder on your linux machine, then run the following to convert them and load them into the CA certificate store:

    $ sudo openssl x509 -in /tmp/<yourcert>.cer -inform DER -out /tmp/<yourcert>.crt
    $ sudo cp /tmp/*.crt /usr/local/share/ca-certificates/
    $ sudo update-ca-certificates
    Updating certificates in /etc/ssl/certs...
    N added, 0 removed; done.
    Running hooks in /etc/ca-certificates/update.d....
    Adding debian:

    You will see SOMETHING like the above (I’m obviously keeping this generic), but you should see a positive number for N added.
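    If you want to sanity-check that DER-to-PEM conversion before touching your real certs, you can round-trip a throwaway self-signed cert. Everything below is scratch material, not your proxy’s cert:

```shell
# Make a throwaway self-signed cert, then save it in DER form the way a
# browser's "X.509 DER" export would
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=scratch-ca" \
        -keyout /tmp/scratch-key.pem -out /tmp/scratch-cert.pem
openssl x509 -in /tmp/scratch-cert.pem -outform DER -out /tmp/scratch.cer

# The same conversion as above: DER (.cer) in, PEM (.crt) out
openssl x509 -in /tmp/scratch.cer -inform DER -out /tmp/scratch.crt

# The PEM file should identify itself as our scratch CA
openssl x509 -in /tmp/scratch.crt -noout -subject
```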

    At this point, linux command-line tools should be aware of the cert and able to trust the root CA:

    $ curl https://www.wormly.com/test_ssl
    html shown

    That’s it! Command-line is done.

    Import the certs for Firefox

    For firefox, you can use the .crt files generated above. Go to

    Firefox->Preferences->Advanced->Certificates->View Certificates->Import

    Import each .crt file in /tmp and approve them for websites.

    That’s it! Firefox is done.

    Import the certs for Chrome

    Unfortunately Chrome is different, and we need to export the certs in another format: Chrome can import ALL the certificates at once from a PKCS #12 (.pfx) file.

    In IE, highlight all the certificates you want to export (using the Ctrl or Shift keys to select multiple entries) and export them as PKCS #12.

    Select a password for these certs; we’ll go with “blah” for our example. Move the resulting .pfx file to /tmp on your linux machine.
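    As an aside, if you don’t have IE handy, openssl can build an equivalent .pfx itself. This is just a sketch with throwaway material (using the same “blah” password):

```shell
# Throwaway cert + key, just to have something to bundle
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-root" \
        -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem

# Bundle into PKCS #12 with the password "blah" (what Chrome will prompt for)
openssl pkcs12 -export -passout pass:blah \
        -inkey /tmp/demo-key.pem -in /tmp/demo-cert.pem -out /tmp/demo.pfx

# Sanity check: unpack it and confirm the cert survived the round trip
# (should print 1, one certificate in the bundle)
openssl pkcs12 -in /tmp/demo.pfx -passin pass:blah -nokeys | grep -c "BEGIN CERTIFICATE"
```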

    Now import into Chrome via Settings->Show Advanced Settings->Manage Certificates->Import. Choose the .pfx from /tmp/<yourcert>.pfx and enter the password you used (in our case, “blah”).

    That’s it! Chrome is finished.


    Hopefully the above helped you out. I owe a special thanks to madvikinggod from the Coding Blocks podcast Slack channel, who gave me some of the openssl commands to inspect which certificate I needed; that was invaluable in moving me forward.

    If you had any issues (or if it worked!), let me know in the comments.

    Good luck!


    Share some Observable love (how share() works in Observable + angular2)

    April 24, 2016


    I’m still relatively new to RxJS (5). There is some good documentation (with more in progress), but a lot of it just comes down to experience and building the mental model. This blog post is about the .share() operator and how it impacts angular2, especially the async pipe. The share() operator allows multiple subscribers to watch the same underlying stream so that there is only 1 execution of the upstream work (think http calls over the wire, where you may have many consumers but only want a single call to be made on their behalf).


    First, for common RxJS/stream visuals, check out RxMarbles. It’s a REALLY good way of visualizing stream operations (which is how I tend to think of Observables … ‘streams’ seems a more natural word to me). That said … no marbles for the share operator.

    ASCII Art (lo-tech marbles)

    Since there’s no fancy RxMarbles diagram, I’ll resort to lo-tech ASCII diagrams here (also, for a great intro to streams check out this gist). Out of the box, each subscriber gets its own execution of the stream when subscribing:

    (stream that has 1, 2, 5 values streaming down)
    Subscriber 1: ---1---2---5---> (its own execution)
    Subscriber 2: ---1---2---5---> (its own execution)

    The above is a basic example where there is 1 stream and 2 subscribers. Each subscriber gets its own execution of the stream and processes it independently. That allows each to act unaware that anyone else is doing anything, including any ‘side effects’ on the stream like map processing or other operations.

    While this is great for keeping consumers consistent and independent, it’s pretty terrible for performance if the stream processing involves the network, heavy latency, or computation-intensive work. This is where .share() comes in. If we take the above diagram and apply .share() to it, it becomes:

    (stream that has 1, 2, 5 values streaming down)
    ---1---2---5---> (one shared execution)
         |
         +--> Subscriber 1
         +--> Subscriber 2

    If, instead of numbers being sent on the stream, we think of each value as the result of an operation on the stream (such as a map, perhaps performing some deep mathematical computation like prime number generation), it starts to make more sense why we want to use share().


    I have an angular2 plunker that illustrates things. In the plunker I am making an http call with some latency (anywhere from 1-2 seconds). I have 2 calls in my angular2 service:

        /**
         * Function will return an observable with the data requested. This can be shared across
         * subscribers and will not cause extra http traffic.
         */
        getDataShared(postNum: number): Observable<any> {
            let calls = 0;
            return this.http
                       .get('http://jsonplaceholder.typicode.com/posts/' + postNum)
                       .do(() => {
                           console.log("side effect on shared");
                           calls++;
                       })
                       .map((res) => {
                           let json = res.json();
                           json.networkCalls = calls;
                           return json;
                       })
                       .share();
        }

        /**
         * Function will return an observable with the data requested.
         * There is no share operator used here so each subscriber will result in the entire stream firing.
         */
        getDataNotShared(postNum: number): Observable<any> {
            let calls = 0;
            return this.http
                       .get('http://jsonplaceholder.typicode.com/posts/' + postNum)
                       .do(() => {
                           console.log("side effect on not shared");
                           calls++;
                       })
                       .map((res) => {
                           let json = res.json();
                           json.networkCalls = calls;
                           return json;
                       });
        }

    I also have bindings via the async pipe in angular2

 <table class="table">
   <tr><td>UserID:</td><td>{{(dataServiceObservableNotShared | async)?.userId}}</td></tr>
   <tr><td>ID:</td><td>{{(dataServiceObservableNotShared | async)?.id}}</td></tr>
   <tr><td>Title:</td><td>{{(dataServiceObservableNotShared | async)?.title}}</td></tr>
   <tr><td>Body:</td><td>{{(dataServiceObservableNotShared | async)?.body}}</td></tr>
 </table>
 <table class="table">
   <tr><td>UserID:</td><td>{{(dataServiceObservableShared | async)?.userId}}</td></tr>
   <tr><td>ID:</td><td>{{(dataServiceObservableShared | async)?.id}}</td></tr>
   <tr><td>Title:</td><td>{{(dataServiceObservableShared | async)?.title}}</td></tr>
   <tr><td>Body:</td><td>{{(dataServiceObservableShared | async)?.body}}</td></tr>
 </table>

    If you run this example and look at the browser console, what you’ll see is that for the NON-shared stream, the stream reaches all the way back to the originator of the data (the http call) for EACH subscriber (the subscriber being each async-pipe binding in angular2).

    side effect on not shared
    {"userId":1,"id":3,"title":"ea molestias quasi exercitationem repellat qui ipsa sit aut","body":"et iusto sed quo iure\nvoluptatem occaecati omnis eligendi aut ad\nvoluptatem doloribus vel accusantium quis pariatur\nmolestiae porro eius odio et labore et velit aut","networkCalls":1}
    side effect on not shared
    {"userId":1,"id":3,"title":"ea molestias quasi exercitationem repellat qui ipsa sit aut","body":"et iusto sed quo iure\nvoluptatem occaecati omnis eligendi aut ad\nvoluptatem doloribus vel accusantium quis pariatur\nmolestiae porro eius odio et labore et velit aut","networkCalls":2}
    side effect on not shared
    {"userId":1,"id":3,"title":"ea molestias quasi exercitationem repellat qui ipsa sit aut","body":"et iusto sed quo iure\nvoluptatem occaecati omnis eligendi aut ad\nvoluptatem doloribus vel accusantium quis pariatur\nmolestiae porro eius odio et labore et velit aut","networkCalls":3}
    side effect on not shared
    {"userId":1,"id":3,"title":"ea molestias quasi exercitationem repellat qui ipsa sit aut","body":"et iusto sed quo iure\nvoluptatem occaecati omnis eligendi aut ad\nvoluptatem doloribus vel accusantium quis pariatur\nmolestiae porro eius odio et labore et velit aut","networkCalls":4}

    You can also see that the ‘side effect’ processing is called 4 unique times. While this may be fine in some cases, it is clearly NOT good when talking over high-latency channels such as http, or with code that is CPU intensive. What we’d prefer is to share the stream across all subscribers and re-use the upstream processing, only consuming the end result. This is what the .share() operator does:

    side effect on shared
    {"userId":1,"id":3,"title":"ea molestias quasi exercitationem repellat qui ipsa sit aut","body":"et iusto sed quo iure\nvoluptatem occaecati omnis eligendi aut ad\nvoluptatem doloribus vel accusantium quis pariatur\nmolestiae porro eius odio et labore et velit aut","networkCalls":1}

    You can also see the very real effect of the 2 different stream processing types by looking at the latency when binding the UI. Since there are 4 different http calls being made in the non-shared example and only a single call in the shared example, the UI itself demonstrates how dramatic the difference in binding time is. Check out the animation below.

    share operator on stream processing

    You can see I put post number ‘3’ into the non-shared stream, and each area of the UI where an async pipe is doing the binding fills in individually and slowly. This is because there is a network call taking place for EACH async pipe, whereas the shared stream makes 1 call and the ENTIRE UI binds in 1 operation afterwards. You can also see the console logs indicate that my ‘side effect’ takes place multiple times for the non-shared stream, versus once for the shared stream.


    The share() operator isn’t applicable in all cases, but for angular2 UI binding with the async pipe, I believe it’ll be wanted in MOST cases where a single underlying stream is used, ESPECIALLY for angular services that make http calls.

    I hope this helped you understand the share() operator in RxJS and also how it impacts performance for any consumer especially angular2 and the async pipe.

    Thanks for reading!

    hardware, linux

    Ubuntu 16.04 Release + Dell XPS 13 9350

    April 23, 2016

    UPDATE 5/11/2016 – The flicker is apparently Chrome-only and has been reported to Google. If you use Firefox you won’t see the screen flickering. You can also run Chrome via “google-chrome --disable-gpu-driver-bug-workarounds --enable-native-gpu-memory-buffers” and the flickering will stop. You can also edit the desktop file, which will allow you to launch the app as you normally would through the launcher; it lives at “/usr/share/applications/google-chrome.desktop”.

    I’ve blogged before about my experiences with the new skylake XPS 13. I’ve been VERY happy with the laptop and wanted to give a status now that 16.04 is released.

    Fn Key Behavior

    All Fn keys work as expected. Notable exceptions seem to be FFWD/RWND and the ‘search’ button (F9). The most important ones for me are display brightness and volume; those work perfectly, as do mute, keyboard brightness, and wifi on/off. Interestingly, wifi on/off (which shares space with the PrtScr (print screen) key) seems to favor printing the screen FIRST even though I have the Fn keys turned on. I’m actually not upset about this, since I rarely need to disconnect wifi and might use the PrtScr function now and again (normally I just use the Screenshot app, so honestly I don’t see myself doing either one).


    Lid Close/Suspend

    Everything is working perfectly here. Plugged in or not, it does what I have specified: when the lid is closed, it suspends.

    Battery Life

    I haven’t run any specific battery tests, but based on my experience this thing is a champ. I get plenty of battery life no matter what I’m doing. On my System76 Galago UltraPro, when unplugged, I can watch the percent indicator tick down at regular intervals. On this thing each tick is given up begrudgingly. I’d estimate easily 7 hrs and maybe as high as 9-10. The ‘pro’ reviews peg the battery life in that range, and I’m definitely inclined to agree.

    My usage is web browsing and programming with web editors/languages such as Visual Studio Code/Sublime/Angular2/gulp builds/etc. Not super battery-hungry applications, though the web includes YouTube now and again, which doesn’t seem to make much of a difference.

    External Monitor

    Saving the best for last … I’ve blogged before about the external monitor via USB-C. I bought a USB-C to DisplayPort adapter from amazon, and when I last tried on the 16.04 Beta1 it didn’t work at all. This time, as soon as I plugged it in, it worked like a champ! I wanted to see how reliable it was, so I unplugged and replugged it … didn’t work. Again … didn’t work. Unplugged the power from the laptop and tried again … WORKED!

    So the summary here is that it seems to give you an external connection just fine ONCE per power state. 🙂 This is a completely weird issue … but if you plug in your USB-C cable and it doesn’t work, unplug the laptop’s power (or plug it in, depending) and then try again; it should work. This makes me hopeful that it will be resolved in future updates, but at least for now there’s a workaround, as goofy as it is.

    Monitor ‘Flicker’

    It was noted in a previous blog by Luis:

    NOTE 4/7/2016: Read the comments for good feedback from others trying. Luis noted: “..before attempting to install one needs to boot and in the BIOS configure SATA-controller to AHCI (or Off).”. He also noted there’s a bug for the screen flicker issue.

    The screen flicker issue still seems to be here occasionally. For now it doesn’t really bother me. It’s not constant; during the entirety of typing this blog post I haven’t seen any flicker. I even went to CNN, scrolled around, and played a video without seeing it. It tends to happen when browsing the web when there are lots of ads on a page, a video playing, or when scrolling quickly through the browser. I can’t really establish a pattern because it doesn’t happen frequently enough; I’m just noting it here so it’s a known, albeit infrequent, issue.


    It’s a shame the laptop isn’t 100% given the external monitor issue, but I’m still absolutely thrilled with it. If I’m not sitting at my desk using my System76 Galago UltraPro, I’m using this thing, because I don’t have to sweat battery life and the form factor and performance are killer. The trackpad works perfectly (remember to install libinput!), the keyboard is great to type on … it’s honestly a dream to have.


    hardware, linux

    Dell XPS 9350 + Ubuntu 16.04 (beta1, Feb 27th)

    February 27, 2016


    I have been DYING to try Mir ever since it was announced. Moving X forward, new libinput integration for a solid touchpad experience … the entire thing is very exciting. Since I’m lucky enough to have a System76 Galago UltraPro AND a Dell XPS 13 9350, I figured I’d try to install 16.04 beta1 (ubuntu/unity) on my XPS. I figured if it blew the laptop up, I’d still have another daily driver to use.

    EDIT 2/28/2016 – I have the XPS 9350 1080p i5 no touchscreen model with broadcom wifi.

    NOTE 4/7/2016: Read the comments for good feedback from others trying. Luis noted: “..before attempting to install one needs to boot and in the BIOS configure SATA-controller to AHCI (or Off).”. He also noted there’s a bug for the screen flicker issue.


    I downloaded the nightly iso from here and burned it to a thumb drive with the ‘Startup Disk Creator’ app in Ubuntu. I plugged it into the XPS, rebooted, selected the USB drive, and off I went.


    Wifi was detected immediately during the installation, that was a huge sign for me since I have the Broadcom chip in my XPS, which historically has not been supported on linux at all until the 4.3/4.4 kernels. Good news is it’s working like a champ.


    The display looks as good as ever, no issues at all during the installation.

    Special Keys/Touchpad/Keyboard

    Everything “just worked” during installation.

    Installation Wrapup

    Everything worked flawlessly. I even installed using UEFI (when I was running 15.10 I had turned this off in the BIOS and was using ‘legacy’ mode). Now onto the details of how it’s running after installation.



    After installation the first thing I wanted to do was install Mir and test it out. I installed it using the following

    sudo apt-get -y update && sudo apt-get -y dist-upgrade && sudo apt-get -y install unity8-desktop-session-mir

    On reboot I clicked the little ubuntu icon above and to the right of my username and changed it to the Mir ‘8’ (looks like an 8-ball to me). I logged in and … nothing. Dunno what happened; I rebooted the laptop and logged in again, and this time it logged in! (… and looked terrible).

    The resolution was screwed up and I couldn’t launch anything … to me it looked like it thought it was in some sort of phone or tablet mode. I had no idea and couldn’t adjust anything, so I quickly gave up. Oh well, I’ll keep trying as they get closer to release.

    Other Software

    The other things I normally do on a new installation are install chrome, dropbox, libinput.


    You can quickly install chrome using the following

    cd ~/Downloads
    wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb && sudo dpkg -i google-chrome-stable_current_amd64.deb && sudo apt-get -f -y install

    I ended up needing the ‘apt-get -f install’ to correct dependencies … it seemed to work fine, and chrome was immediately available from the launcher.


    You can quickly install dropbox using the following

    cd ~/Downloads
    wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf -
    sudo dpkg -i dropbox_2015.10.28_amd64.deb && sudo apt-get -y install python-gtk2 && sudo apt-get -f -y install


    For libinput, you can use my steps from my StackOverflow post here. Vote my answer up if you used it. 🙂

    External Monitor

    I was hoping the external monitor worked, and does … sorta.

    I bought this cable before and it worked … and it still does. But I could only get the monitor to turn on when the laptop was unplugged from the power supply. And I had to unplug and replug the external monitor cable a couple of times, THEN it would come up. After that I could plug the laptop back into the power supply and everything worked fine (I’m using it now to type this blog post).

    So I certainly still recommend the cable; just make sure you unplug the power connector from the laptop, then plug the external monitor cable into the USB-C port, and once the monitor is on you can plug the power connector back in.

    It’s weird .. but it worked and I’m not complaining too much honestly.

    UPDATE: 4/7/2016 – External monitor no longer works. Hoping by release things are back to working.


    I’d call this a roaring success, honestly. With the very notable exception of Mir, everything else on the laptop works 100% out of the box (including Unity 7). Mir will stabilize … but not having to mess around with wifi drivers or Skylake processor issues by installing kernels by hand is a WONDERFUL success for 16.04 for me.

    Let me know in the comments if you have good experiences installing 16.04.

    EDIT: Well, after I rebooted I couldn’t get to a login screen. 🙂 I unplugged the external monitor … unplugged the power cable … rebooted a couple of times. No dice. THAT’S NOT GOOD!! 🙂 The nice thing is I can get to a terminal window, and it has wifi, so I can keep updating with ‘apt-get’ and see how things evolve. So for now, don’t upgrade if you want a working laptop … YOU HAVE BEEN WARNED!!

    LAST EDIT! Everything is good. It was the libinput settings file. I adjusted my stackoverflow post to include a couple more lines and things are booting fine.

    linux, ui

    HTML5 and Docker

    February 14, 2016


    If you’ve ever done UI programming that used the ‘hash’ URL technique for invoking UI actions, it’s time to go html5 native!

    This blog post describes how to set up an apache docker container with URL rewriting enabled. This allows html5 ‘pushState’ routing to be used instead of hash operations. Some advantages HTML5 pushState offers over hash:

    • Cleaner looking URLs for sharing (http://…/employees/list instead of http://…/#/employees/list)
    • Search Engine Optimization (SEO), since you can pre-generate/crawl your site and return content server-side, whereas hash routes always have to be processed on the client and are not SEO friendly

    The major disadvantage is that it requires the server being involved .. that’s where this post comes in.

    NOTE: For angular2, there is a debate going on for what the default should be, hash or html5. You can read about it here.


    I have a little toy project I’ve been using for a few years with Spring Boot and angular to help me prove out concepts. I won’t go into any detail on the UI itself (I recently converted from angular1 to angular2, and that’s what drove this blog post, but it’s applicable to any UI tech that uses hash URLs).

    NOTE: All code is in my git repo.

    To try everything out you’ll need a recent version of docker installed. If you are using an ubuntu/debian based distro you can install it with:

    $ sudo apt-get install -y docker.io

    From here you need a Dockerfile like-so:

    # Pull base apache image
    # ---------------
    FROM httpd:2.4
    # Maintainer
    # ----------
    MAINTAINER Jim Basilio <jim.basilio@gmail.com>
    # Copy file in as daemon user
    USER daemon
    # httpd.conf turns on rewrite module and rewrites 404 errors to load index.html then redirect client
    COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf
    USER root
    # Define default command to start bash. 
    CMD [ "httpd-foreground" ]

    What this does is build a docker image that copies in an httpd.conf which turns on the urlrewrite module and configures the rewrites for requests that would otherwise 404. From the folder containing the Dockerfile and httpd.conf, build the image with:

    $ sudo docker build -t html5-apache .

    For example, when a user first goes to your app at http://awesomeapp.com, index.html loads angular (or whatever UI framework you are running) and angular takes over managing client routing. In a later session, when a user ‘direct links’ to your app at http://awesomeapp.com/users/list, that is a server request asking apache (in this case) to serve the ‘users/list’ folder (normally looking for an index.html there). However, we don’t have that folder on our server, since it really was a UI route, so apache would return a 404.

    The urlrewrite rule is

        Options +FollowSymLinks
        IndexIgnore */*
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule (.*) index.html

    I didn’t originally write the above; I took it from this SO gist. With this in place, any request that would otherwise 404 gets its URL rewritten so index.html is served, and angular then performs the routing it needs. From the user’s perspective all of this is seamless and it just “does the right thing”. Without it, the user’s direct link would result in a “404 not found” and a very confused user. To start the container you can run:

    $ sudo docker run -d -v $(pwd):/usr/local/apache2/htdocs --name=html5-apache -p 8080:80 -t html5-apache

    Be sure to customize the ‘$(pwd)’ with whatever the root is for your application.

    That’s it! You now have a docker image you can use for any app that requires html5 pushState routing. It’s a good alternative to the ‘lightweight’ dev http servers seen in angular2 development stacks. You can start the container and leave it running, adjusting the shared volume with your source code (delete files, change files, whatever), and when you reload your site everything updates instantly.
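    As an aside, the decision those two RewriteCond lines encode (serve the file if it exists on disk, otherwise fall back to index.html) can be mimicked in a few lines of shell, purely as an illustration; the paths below are scratch material:

```shell
# Toy re-implementation of the rewrite decision: serve the requested path
# if it exists as a file or directory under the docroot, otherwise fall
# back to index.html (what RewriteCond !-f / !-d achieve in apache).
route() {
  docroot=$1; path=$2
  if [ -f "$docroot/$path" ] || [ -d "$docroot/$path" ]; then
    echo "$path"
  else
    echo "index.html"
  fi
}

# Scratch docroot to try it on
mkdir -p /tmp/docroot/css
touch /tmp/docroot/index.html /tmp/docroot/css/app.css

route /tmp/docroot css/app.css   # a real asset: served as-is
route /tmp/docroot users/list    # a UI route: falls back to index.html
```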

    I’m using this setup for my hiit-frontend project, which is working great. I can run an ‘npm run build’ and it’ll recompile all my typescript and move my site to the ‘dist’ folder. The dist folder is my shared volume into docker, which I just leave running all the time (I intend to write more about angular2 and the build stack I’m using in another blog post).

    hardware, linux

    Dell XPS 9350 (4.4.0 kernel)

    January 22, 2016

    The laptop has been running well on 4.4-rc7, but the final 4.4.0 kernel came out and I’d rather be on the newest officially released kernel (until Dell releases the XPS 13 with an officially supported kernel, that is).

    So I grabbed the newest kernel and installed it by running

    cd /tmp
    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.4-wily/linux-headers-4.4.0-040400_4.4.0-040400.201601101930_all.deb \
         http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.4-wily/linux-headers-4.4.0-040400-generic_4.4.0-040400.201601101930_amd64.deb \
         http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.4-wily/linux-image-4.4.0-040400-generic_4.4.0-040400.201601101930_amd64.deb
    sudo dpkg -i linux-headers-4.4*.deb linux-image-4.4*.deb

    I rebooted (remember to hit a key while grub is booting to select the kernel you want). Things are running like a champ so far. I’ll keep you posted, and if you install it please share your experience in the comments.

    Also, I wanted to clean up some kernels I had installed (not the 4.2.x one that came with ubuntu, though) and ran the following (with help from this link):


    (list kernels)
    [jim@xps~]$  dpkg -l | grep linux-image
    ri  linux-image-4.2.0-16-generic                  4.2.0-16.19                                amd64        Linux kernel image for version 4.2.0 on 64 bit x86 SMP
    rc  linux-image-4.3.0-wifitest-custom             4.3.0-wifitest-custom-10.00.Custom         amd64        Linux kernel binary image for version 4.3.0-wifitest-custom
    ii  linux-image-4.4.0-040400-generic              4.4.0-040400.201601101930                  amd64        Linux kernel image for version 4.4.0 on 64 bit x86 SMP
    ii  linux-image-4.4.0-040400rc7-generic           4.4.0-040400rc7.201512272230               amd64        Linux kernel image for version 4.4.0 on 64 bit x86 SMP
    ii  linux-image-extra-4.2.0-16-generic            4.2.0-16.19                                amd64        Linux kernel extra modules for version 4.2.0 on 64 bit x86 SMP
    ii  linux-image-generic                                                 amd64        Generic Linux kernel image
    (remove version i want)
    [jim@xps~]$ sudo dpkg --remove linux-image-4.3.0-wifitest-custom
    (rinse and repeat; I do not recommend deleting the 4.2.x kernel)
    (when you are finished uninstalling kernels run the following)
    [jim@xps~]$ sudo update-grub

    Now that you are done installing the 4.4.0 kernel, make sure you enable libinput 🙂 and if you need to connect to a monitor, grab a cable.