Interesting Technologies in 2016

Staying on top of changes in technology is a nearly impossible task. I suspect this is why software engineers are so dogmatic about the tools they’ve chosen: after investing a great deal of time and effort in learning them, anything new represents the fear of the unknown. Regardless, staying current with evolving technology is part of the job description.

I didn’t get to toy around with as many new technologies in 2015 as I would have liked. I’m hoping 2016 will be different. Here are some things that have my attention.

1) Elixir and Phoenix

Extreme reliability and scalability are necessities in large scale software environments today, and the foundation of Elixir and Phoenix, the Erlang VM, promises both.

Elixir is a relatively new language with Ruby-inspired syntax, built on the Erlang VM (think of what Scala or Clojure are to the Java VM). Built on top of it is the Phoenix Framework, a web framework that promises “speed and maintainability”.

If you haven’t yet, check out the blog post detailing how the Phoenix team built a basic chat server that easily supported 2 million concurrent connections.

Everything about this technology looks exciting. Erlang is known for being rock solid, and Elixir looks like a fun, small, and powerful language. The combination of Elixir and Phoenix is what I’m most interested in for 2016.

2) vue.js

I prefer to stay away from front end engineering. However, as a web developer, and given the meteoric rise of JavaScript, avoiding it is nearly impossible.

vue.js is a new library (framework?) for making reactive components for the web.

The “10 second example” on their page is enough to get me interested. Because I primarily write server-side software, the extent of my JavaScript and front end knowledge is including jQuery with a <script> tag and calling it a day.

That isn’t sustainable anymore, and investigating vue.js looks like a great way to get on top of building reactive web applications.

3) Postgres 9.5 and 9.6

Postgres is by no means a new technology, but it is simply the best open source database in the world, period. From native JSON support, to recursive common table expressions (CTEs), to its rock solid stability and reliability, it can’t be beat.

2016 will bring us Postgres 9.5 and possibly 9.6. With 9.5, we finally get UPSERT (called ON CONFLICT in Postgres), which performs an INSERT or an UPDATE depending on whether a row with the same primary key (or other unique constraint) already exists. Postgres was long criticized for not having this feature, but its developers preferred to wait and get it right rather than slap together a feature to stay competitive and call it a day. I am interested to see how ORM developers will integrate this feature into their frameworks.
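
To give a flavor of the syntax, here is a minimal sketch of an UPSERT issued from PHP with PDO. The products table, its columns, and the connection details are hypothetical, and the statement assumes a unique index on sku:

    <?php
    // A minimal sketch of Postgres 9.5's ON CONFLICT clause via PDO.
    // The "products" table, its columns, and the credentials are
    // hypothetical; ON CONFLICT (sku) assumes a unique index on sku.
    $pdo = new PDO('pgsql:host=localhost;dbname=store', 'user', 'secret');

    // Insert the row, or update it if a product with this SKU exists.
    $sql = 'INSERT INTO products (sku, name, price)
            VALUES (:sku, :name, :price)
            ON CONFLICT (sku)
            DO UPDATE SET name = EXCLUDED.name, price = EXCLUDED.price';

    $statement = $pdo->prepare($sql);
    $statement->execute([
        ':sku'   => 'WIDGET-100',
        ':name'  => 'Widget',
        ':price' => 19.99,
    ]);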

I am really excited to see what Postgres has to offer in 2016. If you’re starting a new project, I highly recommend you start with Postgres as your relational database.

4) Rust

Awful name aside, Rust looks like a very interesting language. Compiled, server-side, multi-core, statically typed languages have really taken off recently (Scala, Java, Go, etc.), and I suspect the same will happen for Rust.

It promises thread safety, reliability, and speed. The development of the language has happened at a breakneck pace, and it offers a great view for anyone who wants to see how a language is designed.

Admittedly, I will most likely not have time to use Rust much in 2016, but I suspect its impact will be huge.

5) Decentralized User Management

Savvy Internet users want a single decentralized user management system. OpenID and eventually Mozilla Persona were going to be it, but unfortunately both have languished.

As much as I love 1Password, I hate that I have a separate username and password for every site that I visit.

Though there are many for-profit user management systems out there, I would love to see a successful open source (or rather, open protocol) one implemented and sustained.

Ironically enough, the sites that need secure, hardened, and peer reviewed user management (banks, investment firms, hospitals, and government agencies) are the ones least likely to implement it.

6) Back to the Basics

The Web has changed dramatically since it premiered in 1991, and yet it is still the same. The very first web page still renders fine in all modern browsers, as do millions of other web pages built before 2000.

Building applications for the web has gotten increasingly complicated since 2012. From single page apps using complex JavaScript frameworks to unmaintainable monolithic nightmares, to deploying hundreds of microservices, building for the web has never been more complex.

I suspect developers are going to begin to reject these architectural changes and return to building simple, well tested, and easily maintained applications: a single application with smart use of interactive (or reactive) JavaScript components combined with a well tested and vetted backend framework.

I’ve found that combination makes for an amazing experience, both for the developer and end user.

2015 In Review

2015 was a rough year for me (and my family). I’m glad it’s over and I’m excited about what 2016 will offer.

In early February, my grandmother on my dad’s side, Margaret Cherubini, passed away. Her death was a great loss to me. I knew her health was declining, but you never can fully expect to get The Call about her passing. She lived an amazing life of 92 years in Canarsie in Brooklyn, New York, raised my father and uncle, and personified living life to its fullest. I’ll always remember visiting her over the summer, listening to her stories of growing up in a family of eleven children, and enjoying her excellent cooking.

Two weeks later, my third (and final) child, Quentin, was born 11 weeks early. Quentin had a condition named IUGR, or intrauterine growth restriction, while in the womb and was very small for his gestational age. Most 29 week babies are in the 4lb range, but Quentin was 2lbs 6oz when born, and quickly dropped to 2lbs. The next four months went by in a blur: my wife Ashley would spend the days at the hospital while I was at work, and I’d head up there after dinner until about 1 or 2am. We were unquestionably able to get through this ordeal only with the love and help of our families. We’re extremely lucky to live close to Ashley’s side of the family, who can help out at a moment’s notice.

Quentin was in the hospital for four months. Before he left, he had surgery to implant a g-tube because he had trouble eating. A g-tube is a small button that allows for direct feeding into your stomach. This allowed him to get nutrition while learning how to eat orally. The g-tube is finally coming out on January 4th.

Unfortunately, Quentin came down with pneumonia in December and was re-admitted to the hospital twice. We were finally released on Christmas day. We’re hoping this is our final hospital stay.

My wife and I are incredibly thankful we live in a large metropolitan area with one of the best neonatal intensive care units (NICU) in the nation. The stories of other families traveling hours to spend time with their premature children were harrowing. The staff of nurses, doctors, respiratory therapists, and hospital support staff at our hospital (Presbyterian Dallas) were second to none, and I look forward to seeing them at the yearly anniversary party the nursing team throws. No doubt the entire ordeal was the most stressful time of our lives, but it solidified our marriage and family.

I have three sons, Nicholas (5), Miles (2), and Quentin (10 months), so things are frequently hectic, stressful, and loud. I love them all so much and I’m looking forward to seeing how they grow in the coming year.

My side business, Bright March, did well in 2015, and we expect to do a lot better in 2016. Our latest app, Rate My Tech, is doing very well, and we have lots of opportunities to grow it in 2016. We’re also undergoing a re-brand, which is exciting. April 2016 will mark the fifth anniversary of starting Bright March, and I’m incredibly proud of the work we’ve done in that time.

I decided to make several changes towards the end of 2015. I’m reading more (I’ve finished six books since October), trying to write more, and consistently exercising. Before Quentin’s latest hospital stay, I was down nearly 20lbs. Right now my schedule allows me about an hour of exercise every night after the boys are asleep. At some point I’d like to step back into the gym on a regular basis but for now I’ll take what I can get.

I did not get to learn as many new technologies (mostly new programming languages) in 2015 as I originally wanted. I’m still interested in picking up Go and Rust, but for now I primarily spend my time in PHP with Symfony. The PHP ecosystem has grown dramatically in the last two years, and the release of PHP7 makes it an incredibly powerful language. It still has its warts, of course, but those who dismissed it years ago (probably rightfully so) should take a look at how the language has matured.

2016 will be the year we cement our roots in Dallas. We’ll be purchasing a new home as we’ve outgrown our current one.

People are usually optimistic about the new year, and I’d be lying if I said I wasn’t. 2015 was an exhausting year, and I’m glad it’s concluding. I’m excited about 2016, and I hope you are as well.

Book Review: Influx

I have been a fan of Daniel Suarez since I first picked up a copy of Daemon at a local Half Price Books. Daemon still stands as his best work, but his latest book, Influx, is very close to it.

Influx is about a secret government agency that controls emerging technologies that could fundamentally change the way we live: new materials, advancements in cancer and disease fighting, new computational methods, and changes to our understanding of physics. The agency, named the Bureau of Technology Control, or BTC, kidnaps the scientists who made these discoveries and forces them to work for the BTC itself. If the scientist refuses, they are banished to a secret prison named Hibernity where they are tortured for the rest of their lives.

The main protagonist, a physicist named Jon Grady, manages to escape Hibernity and has tasked himself with destroying the BTC. The remainder of Influx is spent following him on this mission.

The main thesis behind Influx, “Are we advancing technology too quickly, and what effect does that advancement have on our society?”, is a strong one. Like a lot of sci-fi books, the origins of the main villain, the BTC in this case (and its director), may have been pure. Perhaps we weren’t able to handle the speed at which technology flourishes. At some point, however, the BTC became greedy, corrupt, and power-hungry, and its initial intentions changed drastically.

If you’re a fan of Suarez, you can probably imagine the answer to his thesis, so I won’t bother writing it. The story is very well written, and you can tell Suarez has matured as a writer. The main villain was especially evil, and you really ended up hating him by the end of the book. The technology in the book was very well researched (even if nanotechnology was a tad overused). I specifically enjoyed that there was very little ham-fisted love story in Influx. The love story in his previous novel, Kill Decision, felt forced and out of place.

I enjoyed Influx much more than Kill Decision, but it doesn’t quite live up to Daemon. You don’t have to be familiar with Suarez’s previous work to enjoy Influx, and any sci-fi fan is guaranteed to enjoy it.

Unit Testing Your Service Layer is a Waste of Time

Writing unit tests for your application’s service layer is a waste of your time and won’t strengthen your application any more than functional tests would. Writing functional tests that thoroughly verify your application does what it says it will do is a much better use of your resources.

I first learned about test driven development (TDD) and automated testing in 2009 and was blown away. I immediately purchased the book xUnit Test Patterns and devoured it. I learned about code smells and fixtures and stubs and mocks and everything in-between. It was amazing that I could actually verify that my application works like I intended it to. Throw TDD into the mix, where I could guarantee that I write only as much code as necessary to pass my tests, and I was hooked.

I immediately took an academic approach to testing. Invert every component’s control, mock every depended-on component (DOC), completely isolate and unit test everything: the database, other objects, even the file system.

For trivial applications like a project hacked together on a weekend, this practice was fine, though I felt like I spent as much time writing tests as I did writing the code itself.

Fast forward a few years and I have a successful consultancy and am working full time writing web and network applications. I’m still trying to write unit tests and follow TDD principles, but it just isn’t working.

For example, I would write both unit and functional tests for my application’s controllers. For a Symfony application, this is a non-trivial task, as the number of depended-on components in each controller can easily force your tests to be longer than the controller code itself. Breaking controllers into very small units is difficult because they often handle UI interactions and must communicate with multiple components of the application.

Application Development Has Evolved

When TDD and automated testing became integrated with object oriented programming, databases were behemoths that required a full time team to manage. File systems were slow and unreliable. It made sense that your tests should consist of small units tested in isolation: your components were unreliable!

Today, enterprise applications are contained in a single repository. You can mirror a production server on your insanely powerful laptop with a single Vagrantfile. Working with databases in your tests is a cinch (especially with tools to build out your fixtures in YAML). In all but a few instances, you generally don’t need to worry about a network device or disk drive failing during test suite execution. And modern frameworks employ multiple environments with a dependency container that makes it very simple to mock complex components in your test suite.

Application development has evolved, and the time spent writing unit tests that mock every component in your system does not offer the same dividends as writing a simple functional or integration test does.

Motivating Example

Let’s look at a small example of how writing a simple functional test can speed up development and still provide the same guarantee that your code works.

If you recall from my previous article on using the ORM, I am in the process of building an importer for an online store. I have the code written, and now I want to verify it is correct (I’ve thrown any notion of TDD out the window).

In the first example, all depended-on components of the system under test (SUT) are mocked.

View ProductImporter Test Suite With Mocks
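
To give a flavor of that style, here is a minimal sketch of a mock-everything unit test. The ProductImporter under test and its collaborators are hypothetical stand-ins, not the actual suite linked above:

    <?php
    // A minimal sketch of the mock-everything unit test style. The
    // importer and its collaborators are hypothetical stand-ins.
    use PHPUnit\Framework\TestCase;

    interface FileParser { public function parse($path); }
    interface ProductRepository { public function save(array $product); }

    class ProductImporterTest extends TestCase
    {
        public function testImportPersistsEachParsedProduct()
        {
            // Stub the parser so no real file is ever read.
            $parser = $this->createMock(FileParser::class);
            $parser->method('parse')
                   ->willReturn([['sku' => 'WIDGET-100', 'price' => 19.99]]);

            // Expect the repository to persist exactly one product.
            $repository = $this->createMock(ProductRepository::class);
            $repository->expects($this->once())
                       ->method('save');

            $importer = new ProductImporter($parser, $repository);
            $importer->import('products.csv');
        }
    }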

The first example is academic; it’s pure. It proves my code works, it’s fast, and it tests individual units. The second example is functional. I’m taking actual files, running them through my code, and seeing how the code responds.

View ProductImporter Test Suite With Data Provider
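
Again as a hedged sketch, the functional version might look something like this, feeding real fixture files through the importer via a PHPUnit data provider (the fixture file names, expected counts, and result object are made up):

    <?php
    // A minimal sketch of the functional, data-provider-driven style.
    // The fixture files, expected counts, and result API are hypothetical.
    use PHPUnit\Framework\TestCase;

    class ProductImporterFunctionalTest extends TestCase
    {
        /**
         * @dataProvider providerProductFiles
         */
        public function testImport($file, $expectedCount)
        {
            // Real collaborators and real files: no mocks involved.
            $importer = new ProductImporter(new FileParser(), new ProductRepository());

            $result = $importer->import(__DIR__ . '/fixtures/' . $file);

            $this->assertSame($expectedCount, $result->getImportedCount());
        }

        public function providerProductFiles()
        {
            return [
                'well formed file'       => ['products.csv', 250],
                'malformed client file'  => ['malformed.csv', 0],
                'file matching the spec' => ['client-spec.csv', 118],
            ];
        }
    }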

Another benefit of the second example is that the data provider can grow over time. Did a client deliver a malformed file and you want to see how the code responds? Throw it in the provider. Does a client think there’s an issue with your importer because their file matches your spec? Throw it in the provider. This makes it easy to find actual bugs that a unit test would completely ignore.

When to Write Unit Tests

There are, of course, times when writing unit tests is necessary and writing functional tests may be impossible.

If you are writing a library or framework, it is wise to write unit tests. You don’t know how your code will actually be used, so having formal verification that your library classes do what they say they will is a must. It also adds a great deal of built-in documentation to your codebase, because a passing test, by definition, accurately documents your code.

Another time to write unit tests is for a legacy codebase right before a refactoring. If the codebase is old enough, your DOCs may be very difficult to work with, and a unit test will accurately capture the existing behavior of your application.

From the moment I started writing tests, I’ve attempted to get every developer at every company I’ve worked for to start writing tests, without much luck. In retrospect, I feel that if I had started with a “functional first” approach, I would have been more successful. Introducing a developer to testing by way of simple functional tests may be the best way to get all developers writing tests, period.

Relaxing Hacking

I love music and listen to a lot of it when I’m programming. While I still love listening to heavy metal from time to time, I’ve switched to indie and easy-electronica genres. It helps me concentrate better, I enjoy the electronic sounds, and the vocals are usually uplifting.

Spotify makes it incredibly easy to create a playlist of your favorite music. I’ve created a public playlist named “Relaxing Hacking” that I listen to when writing or programming. If you’re into that type of music as well, I suggest you check out the playlist. Artists include:

  • Brothertiger
  • Phantogram
  • Washed Out
  • Youth Lagoon
  • Geographer
  • CHVRCHES
  • Future Islands
  • Glass Animals
  • Chet Faker

I Am a Great Programmer, But a Horrible Algorithmist

This is an old post, written in February 2013 and published on my old blog. I am republishing it here because it resonated with the community when it was first posted. The basic idea continues to flourish with the rise of sites like rejected.us and Max Howell’s very popular tweet.

I am a great programmer, but a horrible algorithmist. It is a thought that has been weighing on me heavily recently, and I’d like to gather other developers’ feelings on the subject as well.

I started what can be called my professional development career back in 1999. I was still in middle school, but my father hired me at his software company. My official duty was to make updates to our websites, but I mostly ended up bugging the other developers to help me learn.

From there I picked up Perl (somewhat) and then moved to PHP and front end web development where I have stayed comfortably for the last twelve years.

When it comes to building large scale systems, understanding the details of those systems, and actually writing them, I do very well. I can write elegant PHP code (believe me, it exists), and I understand programming well. I do all the things a software craftsman does: write tests, automate as much as possible, learn new technologies, hone my craft with side work and open source work, and build systems that will scale with demand and customer requests.

I even have a degree in Computer Science from what I think is a great university.

However, I feel I am a horrible algorithmist.

Ask me to write a complex algorithm (even one that has already been discovered), and I get sweaty palms and nervous. Is this a symptom you have as well? To express an algorithm in code, I have to spend a lot of time understanding it first.

I understand that an algorithm is just a series of steps to solve a problem. I am referring to complex algorithms like sorting, recursive merging strategies, cryptography, and compression, to name a few.

My proudest college accomplishment was writing the A* algorithm for my first Data Structures and Algorithms class. I spent hours physically drawing graphs and keeping written tables of the heap that the nodes were being pushed onto and off of.

I even kept the drawings because I was so proud of them (click the links below to see the sketches).

A* Sketch #1
A* Sketch #2
A* Sketch #3
A* Sketch #4
A* Sketch #5

What it boils down to is that I often have trouble seeing the underlying algorithm in a complex problem. I once interviewed with Amazon and did not make it past the second round because I could not see the underlying algorithm in one of the questions they asked me (the questions on overall architecture, however, I aced just fine). Fortunately, this is not an ability you either have or lack. Some programmers do have a natural knack for seeing the underlying algorithm in a problem, but if you do not, it can be learned.

Am I alone in feeling this? Do other programmers struggle with this as well? Is this a manifestation of Imposter Syndrome? I thoroughly enjoyed college, but I did not study as hard as I should have. If you are a Computer Science major in college now and a lot of this does not come naturally, I urge you: please spend time on your studies. Really learn the algorithms presented in class. Even if you never actually use them during your career, at least knowing them will help you feel more like a programmer.

Application Scalability Has Evolved

This is an old post, written in November 2012 and published on my old blog. I am republishing it here because I believe most of the thoughts presented have come true. We have Vagrant and Docker for fast onboarding, modern frameworks allow applications to be built very rapidly, the rise of continuous integration tools proves rapid release cycles and automated testing are popular, and the dominance of Slack shows communication is as important as ever.

Application scalability has evolved. I think a lot of technologists, from developers to operations, believe that scaling an application simply means adding more servers behind a load balancer and calling it a day. That might have been the case when writing web applications was a new medium of software development in the late 1990s and early 2000s, but the last several years have completely changed what it means to write a scalable application.

Modern application scalability is all about speed. As competitive as writing software is today, speed is not only a feature, it is a complete game changer.

Onboarding New Developers

Writing a scalable application means you need to be able to let new developers work on it as quickly as possible. The delta between when a developer joins your organization or project and when they can begin hacking on it should be measured in minutes, not hours, days, or weeks.

Have you ever started at a new company and not felt useful on the project you were assigned until several weeks in? I have. That application is not scalable. Ideally you should be able to arrive at work on your first day, open your new computer, clone the project you are working on, adjust some build settings, and immediately build, test, and deploy the project (locally).

A scalable application allows developers to work on their own local machines, running their own local databases, services, cron jobs, and other necessary applications. A scalable application allows developers to work entirely in their own sandbox.

Application Building

The ease with which you can build a new version of your application determines how scalable it is. A slow build time means you cannot respond well to increased load. Large applications (500,000 LOC or larger) should take at most several minutes to build; smaller applications, several seconds.

If you are constantly fighting the build process, and are afraid of deployments, your application is not scalable. Take the time to reduce the delta from when a change is made to when it is pushed to production. It is worth spending the time to build a configuration system so you can turn features on and off in production. This way, you can build out features, push them into production, and enable them as needed. This is a powerful mechanism to reduce the number of errors you have after releasing code that has been under development for several months.
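
As a rough sketch of what such a configuration system might look like (the FeatureFlags class and the flag names here are hypothetical; the flags could just as easily live in a database table or environment variables):

    <?php
    // A minimal sketch of a production feature toggle. The FeatureFlags
    // class and the flag names are hypothetical.
    class FeatureFlags
    {
        private $flags;

        public function __construct(array $flags)
        {
            $this->flags = $flags;
        }

        public function isEnabled($name)
        {
            return isset($this->flags[$name]) && $this->flags[$name];
        }
    }

    // Ship the new checkout flow dark, then flip the flag without a deploy.
    $features = new FeatureFlags(['new_checkout' => false]);

    if ($features->isEnabled('new_checkout')) {
        // render the new checkout flow
    } else {
        // render the legacy checkout flow
    }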

Release Cycles

A scalable application has rapid release cycles. Rapid release cycles find bugs quickly and respond to new requirements faster. As soon as your code is written, has passed all of its tests, and been reviewed, push it to production. Having tools already in place to monitor failures makes it easy to determine how your code is operating in production. Your goal should be to release more high quality production code as fast as possible.

Our tools, development practices, and hardware are now too good to keep letting developers build in a silo for three months and then spend four sleepless days releasing their code.

Communication Styles

Teams must communicate well to build a scalable product, and that includes non-development teams. Inefficient communication leads to poor product development. We have all been on email threads longer than a book. Information is lost or disguised, and building a product that scales to your customers’ demands becomes more and more difficult.

Communication needs to be efficient to build a scalable product. Developers need to be able to communicate clearly and efficiently among themselves. That means quiet working conditions and the best tools possible. A developer can hopefully express herself most efficiently with code, so having good code review tools and practices in place is a must. A code review’s primary purpose is not to catch bugs; it is to spread knowledge.

How quickly can sales managers tell your customers about production ready features? The faster you can onboard new customers to your product, the faster you will find your pain points and patch them, and the more scalable your platform becomes. The Marketing and Development departments should work hand in hand. Everyone at the company is responsible for marketing the product, and everything they do markets the product.

Test Coverage

A scalable software platform has great test coverage. With a great suite of tests, developers can easily mimic different load scenarios without having to release the product into production first. They will immediately know if their changes negatively affect the speed of the product.

Customers pay your company for quality work. Releasing software without tests is not releasing quality work.

Spend the (short) amount of time it takes to have your developers learn testing. Spend the (short) amount of time it takes to write tests for your application. It will pay dividends in the end.

Ability to Change

Application scalability boils down to one thing: the ability of a team to change rapidly. Each of the sections above relates to rapid change. All of it can seem overwhelming if you are used to a slower development pace. As you add more of the above suggestions into your development flow, you will notice a snowball effect.

Each new change will piggyback on the previous one, and the time it takes to integrate it into your team will be smaller than for the change before it. For example, getting your developers to write tests may take two weeks. But as soon as they are writing tests, their code reviews will increase in quality because they are reviewing both the code and the test changes. Suddenly, bugs that were not clear earlier scream out of the screen. Knowledge will spread quicker, and developers will be onboarded faster. Releases will take seconds rather than hours. Communication between developers and sales managers will increase because developers will be excited to show off their new working features.

Application scalability has evolved. To be a successful software engineer, you must evolve with it.