Latest Posts

Monitoring Containers: Do you know what's happening inside your cluster?

This was originally published on May 18th on the Amalgam Insights site. For reasons I can't fathom, I forgot to push the publish button. It's not news that there is a lot of buzz around containers. As companies begin to widely deploy microservices architectures, containers are the obvious choice with which to implement them. As companies deploy container clusters into production, however, an issue has to be dealt with immediately: container architectures have a lot of moving parts. The whole point of microservices is to break apart monolithic components into smaller services. This means that what was once a big process running on a resource-rich server is now multiple…

Microsoft Azure Plus Informatica Equals Cloud Convenience

This was originally published on June 4, 2018 on the Amalgam Insights site. Two weeks ago (May 21, 2018), at Informatica World 2018, Informatica announced a new phase in its partnership with Microsoft. The two companies announced that Informatica's Integration Platform as a Service, or IPaaS, slated for release in the second half of 2018, would be available on Microsoft Azure as a native service. This is a different arrangement than Informatica has with other cloud vendors such as Google or Amazon AWS. In those cases, Informatica is more of an engineering partner, developing connectors for their on-premises and cloud offerings. Instead, Informatica IPaaS will be available from the…

The Abstraction Disconnect is Silly

This blog originally appeared on the Amalgam Insights site on May 8, 2018. Over the past two weeks I've been to two conferences run by open source communities. The first was the CloudFoundry Summit in Boston, followed by KubeCon+CloudNativeCon Europe 2018 in Copenhagen. At both, I found passionate and vibrant communities of sysops, developers, and companies. For those unfamiliar with CloudFoundry and Kubernetes, they are open source technologies that abstract software infrastructure to make it easier for developers and sysops to deliver applications more quickly. Both serve similar communities and have generally similar goals. There is some overlap – CloudFoundry has its own container and…

Managing for DevOps

This was originally published on the Amalgam Insights site in July of 2018.

I am constantly asked the question "What does one have to do to implement DevOps?", or some variant. Most people who ask this question say they have spent time searching for an answer. The pat answers they encounter are typically either technology-based ("buy these products and achieve DevOps magic") or management-based ("create a DevOps culture"). Both are vague, flippant, and decidedly unhelpful.

My response is twofold. First, technology and tools follow management and culture. Tools do not make culture, and a technology solution without management change is a waste. So, change the culture and management first. Unfortunately, that's the hard part. When companies talk about changing culture for DevOps, they often mean implementing multifunctional teams, or something less than that. Throwing disparate disciplines into an unregulated melting pot doesn't help. These teams can end up as dysfunctional as any other management or project structure. Team members will bicker over implementation and try to protect their hard-won territory.

As the old adage goes, "everything old is new again," and so-called DevOps culture is no different. Multi-functional teams are just a flavor of matrix management, which has been tried over and over for years, and they suffer from the same problems. Team members have to serve two masters, and managers act like a group of dogs with one tree among them. Trying to please both the project leader and their functional management creates inherent conflicts.

Another view of creating DevOps culture is what I think of as the "CEO Buy-in Approach." Whenever there is new thinking in IT, there always seems to be advocacy for a top-down approach that starts with the CEO or CIO "buying in" to the concept. After that, magic happens and everyone holds hands and sings together. Except that they don't. This approach is heavy-handed and reflects an unrealistic view of how companies, especially large companies, operate. If simply ordering people to work well together were all it took, there would be no dysfunctional companies or departments.

A variation on this theme advocates picking a leader (or two, if you have two-in-the-box leadership) to make everyone work together happily. Setting aside the difficulty of finding people with broad enough experience to lead multi-disciplinary teams, this leads to what I have always called "The Product Manager Problem." The problem all new product managers face is the realization that they have all the responsibility and none of the power to accomplish their mission. That's because responsibility for the product concentrates in one person, the product manager, while all other managers can diffuse their responsibility across many products or functions.

Having a single leader responsible for making multi-functional teams work creates a lack of individual accountability. The leader, not the team, is held accountable for the project, while the individual team members are still accountable to their own managers. This may work when the managers and project team leaders all have great working relationships, but in that case, you don't need a special DevOps structure. Instead, a model that creates a separate project team leader or leaders enables team dysfunction and the ability to maintain silos through lack of direct accountability. You see this when you have a Scrum Master, Product Owner, or Release Manager who holds all the responsibility for a project.

The typical response to this criticism of multi-functional teams (and the no-power product manager) is that leaders should be able to influence and cajole the team, despite having no real authority. This is ridiculous and refuses to accept that individual managers and the people who work for them are motivated to maintain their own power. Making the boss look good works well when the boss is signing your evaluation and deciding on your raise. Sure, project and team leaders can be made part of the evaluation process, but really, who has the real power here? The functional manager in control of many people and resources, or the leader of one small team?

One potential solution to the DevOps cultural conundrum is collective responsibility. In this scheme, all team members benefit from or are hurt by the outcome of the project. Think of this as the combined arms combat team model. In the Army, multi-functional combined arms teams are put together for specific missions. The team is held responsible for the overall mission, collectively and individually. While the upper echelons hold the combined arms combat team responsible for the mission, the team leader has the ability to hold individuals accountable. Can anyone imagine an Army or Marine leader being let off the hook for mission failure because one of their people didn't perform? Of course not, but they also have mechanisms for holding individual soldiers accountable for their performance.

In this model, DevOps teams would collectively be held responsible for on-time completion of the entire project, as would the entire management chain. Individual team members would have much of their evaluation based on this, and the team leader would have the power to remediate nonperformance, including removing a team member who is not doing their job (i.e. firing them). They would also have to have the ability to train up a team member from one function to fill the role of another if the person performing that role wasn't up to snuff or had to be removed. It would still be up to the "chain of command" to provide a reasonable mission with appropriate resources.

Ultimately, anyone on the team could rise up and lead this or another team, no matter their specialty. There would be nothing holding back an operations specialist from becoming the Scrum Master. If they could learn the job, they could get it. The very idea of a specialist would lose power, allowing team members to develop talents no matter their job title.

I worked in this model years ago, and it was successful and rewarding. Everyone helped everyone else and had a stake in the outcome. People learned each other's jobs so they could help out when necessary, learning new skills in the process. It wasn't called DevOps, but that's how it operated. It's not a radical idea, but there is a hitch: silo managers would lose power or even cease to exist. There would be no Development Manager or Security Manager. Team members would win, the company would win, but not everyone would feel like this model works for them.

This doesn’t mean that all silos would go away. There will still be operations and security functions that maintain and monitor systems. The security and ops people who work on development projects just wouldn’t report into them. They would only be responsible to the development team but with full power (and resources) to make changes in production systems.

Without collective responsibility, free of influence from functional managers, DevOps teams will never be more than a fresh coat of paint on rotting wood. It will look pretty, but underneath, it's crumbling.

Why Linux Desktops Haven’t Taken Over the World

KDE Plasma Splash Screen

There’s no doubt that Linux has taken over the datacenter. Walk into any major datacenter in the world and there will be racks of Linux servers with only a handful of Windows Servers. Most cloud services are based on Linux as well. Some big banks still have ancient mainframes but many of those are using Linux. Even Microsoft has embraced Linux! To older technologists that’s like hearing the Pope has embraced Satanism. What began as a hobby more than 25 years ago is now the dominant server operating system. So why then do we see so few Linux desktops?

To answer that question, some myths need to be dispensed with immediately. They are:

  • Linux Desktops are Hard to Use. Not at all. Sure, 25 years ago, when Linux was a DIY sort of operating system and everything had to be configured by hand, it was damn hard to install and maintain. Now? Not so much. Linux desktops have a host of utilities that make installation, maintenance, updates, and acquiring new software easy. Snap and Flatpak are making the bundling and installation of software, well, a snap (see the sketch after this list). Most major distributions* have, for years, come with utilities to find and install third-party software in an application store. This was long before the Microsoft Store and Android application stores existed.
  • Linux Desktops Are Primitive and Ugly. Right away, we need to get something out of the way – aesthetics matter. If someone is going to stare at a desktop for hours on end, it had better be decent enough to look at. Functionality is important too. A clunky user experience (UX) becomes a drag on productivity over time. This is why Microsoft has put so much effort into the Windows 10 UX and aesthetics over the past few years, and why Apple's macOS is still around at all.
    This criticism of the Linux desktop UX is outdated. Distributions such as Ubuntu by Canonical, Elementary OS, and Linux Mint have complete and rich user experiences. Most use a variation of Gnome or KDE Plasma desktops but, as is the beauty of Linux, they can be replaced with something that suits individual styles. Gnome appeals to a more minimalist approach, while KDE Plasma is fond of Widgets. Elementary OS uses its own variation on Gnome that makes it much more macOS-like. All use modern motifs that are instantly recognizable to the average consumer.
  • Nothing Runs on Linux. Actually, a lot of software runs on Linux, most importantly browsers. As more software is consumed through the browser, it has become the single most important piece of software for any computer to have. The two most common browsers, Mozilla's Firefox and Google Chrome, run on Linux. In addition, there are many other browsers, such as Midori, that run on Linux, including some specialized browsers that only run on Linux. A lot of software common on Windows or macOS desktops is also available for Linux, including Spotify and Skype. Most open source software, such as LibreOffice and GIMP, has its roots in Linux and is also cross-platform.
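
To make the Snap and Flatpak point concrete, here is a minimal sketch of installing a desktop application with each. It assumes Spotify is published to the Snap Store and GIMP to the Flathub repository, which is typical but worth verifying for any given application:

    # Snap: install an application from the Snap Store in one command
    sudo snap install spotify

    # Flatpak: register the Flathub repository once, then install and launch GIMP
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install -y flathub org.gimp.GIMP
    flatpak run org.gimp.GIMP

Both tools bundle an application together with its dependencies, which is why the same package can work across different distributions.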

Linux desktops have two obvious advantages. First, they tend to have lower resource needs. There are distributions that can run on computers with as little as 256MB of RAM, although they are very limited in what they can do. Linux can run comfortably on a computer with only 2GB of RAM and a rather tiny hard drive. A Linux desktop can even run on a single-board computer such as a Raspberry Pi. This is why modern Linux desktops are a great way to keep using an old computer that can't upgrade to new versions of Windows anymore.

The second advantage is that it is often free. Most major Linux desktops can be downloaded and installed for free. There are also thousands of useful applications that are equally free. Unlike software from individuals, which can get old and stale if the author gets bored or distracted, most free Linux software is supported by open source communities and foundations that work to keep the software fresh and modern.

Given the advantages of Linux and having debunked the myths, here are five reasons why Linux has not taken over the desktop as it has the server:

  • Free Software Comes at A Price – Support Is Not Included. Yes, there is support from the "community," but that is different from having someone to call for guided assistance. There are paid support plans for many distributions and, while paid support is not very expensive for commercial users, it is relatively expensive for consumers. This is especially the case when a paid operating system, such as Windows or macOS, comes pre-installed on a computer and includes support.
  • Old Software. Everyone has legacy software. For a consumer, it might mean an old game that they love or that greeting card builder from 2001. Companies have lots of homegrown or purchased software that only runs on Windows or macOS. In either case, this software is too expensive or difficult to replace, even if it is only used occasionally.
  • Inertia. Whether it's companies or individuals, it's often easier to stick with the familiar. Investments in knowledge and support, not to mention software, are preserved. This dynamic can change when the familiar OS changes radically, as was the case when moving from Windows 7 to Windows 8.
  • It’s Not What Is Used at Work. What is used at work often dictates what happens in the home. Early in personal computing, Microsoft was able to get a foothold in the workplace while Apple was looking to win over the consumer. Look how that turned out. It’s much easier to know one operating system for both work and home.
  • Microsoft Office. All of the other reasons for the lack of traction of Linux desktops can be overcome in a number of ways. One can use a switch to Linux as an opportunity to upgrade other software, learn new things, or come to the realization that home and work are different spheres of life. The one thing that Linux desktops cannot overcome on their own is that Microsoft Office for Linux simply doesn't exist, and everyone uses Microsoft Office. The open source community can push LibreOffice or any other alternative until the Sun burns out, but it won't change the fact that the mass of companies and individual consumers use Microsoft Office. The browser version of Office is okay, but everyone needs a desktop version, either for the features or because they don't have a decent Internet connection. Using a Linux laptop on an airplane means not using Microsoft Office and having to rely on software that has a different user experience and imperfect compatibility.

There will always be individuals and companies that adopt Linux desktops for philosophical or cost reasons. Linux is great for reviving an old computer that would otherwise be useless. It is also possible to use only Free and Open Source Software (FOSS) with a Linux computer, which some people value. The same cannot be said for Microsoft Windows or Apple's macOS. Developers also adopt Linux desktops since they sync up well with the server environments they work with. The masses of computer users, on the other hand, are unlikely to switch until Microsoft Office is available for Linux and there is a decent Windows compatibility layer. Until then, Windows owns the desktop, with macOS the alternative for Microsoft haters.

* A Linux distribution is a bundle of software that runs on top of the basic Linux system. Desktop distributions include a desktop environment and a set of free applications, including a browser and usually the LibreOffice productivity suite.