warpedvisions.org

ESSAYS, HOW-TO ARTICLES, AND GENERAL MAKERY BY BRUCE ALDERSON.

I had a realization last week:

How can it be more me? If I'm not in it, can I get into it?

I'm an idea guy. It's why I love designing software, both in terms of system design and user experience. I love designing and developing products too. It's something that can get me fired up, keep me from sleeping, and keep me motivated through even the darkest, rainiest days.

It's easy to conflate the love of designing things and your favourite ideas with projects you should actually work on yourself. Not every idea is worth doing. Not every idea is suitable for a given company, team, or even individual. The thing has to fit the people making it somehow. Ideally the ideas and designs embody something at the root of those people, as the result will be much more than the sum of its parts.

So I had one of those ideas

Like many people, I use task lists to organize the projects I work on. I write the tasks on a large piece of paper, organizing them in lists and columns around the sheet. In the center of the page, I summarize the high points of all the lists into a single focus list, which I separate visually from my others on the page. This list has very specific tasks and goals that serve the various projects and sub-projects on the page. Often finishing the focus list is an incredibly successful sprint of work towards some larger goal.
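
As a rough translation of the paper layout into data, a focus page might look something like the sketch below — a minimal TypeScript model where all of the names are invented for illustration:

  // A hypothetical model of the paper page: project lists arranged
  // around the sheet, with one central focus list summarizing them.
  interface Task {
    title: string;
    done: boolean;
  }

  interface ProjectList {
    name: string;
    tasks: Task[];
    position: { column: number; row: number }; // where it sits on the page
  }

  interface FocusPage {
    projects: ProjectList[];
    focus: Task[]; // the central list: specific tasks that serve the projects
  }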

One of many focus list wireframes from 2016

I've been thinking about building a tool for my focus lists for a few years now. I've used the project as a way to practice sketching various UI designs, looking for balances between skeuomorphic, spatial, and domain-centric ways of thinking about the problem. As a practice project it's been fun.

The other day I read that Microsoft planned a shutdown of a recent acquisition, Wunderlist. I wondered if I should dust off my focus list designs; I could easily commercialize the idea and build it myself. The problem was that I didn't bother to stop and ask myself if it was something I should do. I didn't wonder if it fit who I am.

When I dug into the question a bit, I found a list of reasons why I wanted to build a task-list application:

  • My idea of focus lists is pretty great
  • I know how to design and build web things
  • I have a bunch of designs that I'd love to build

My reasons to build a TODO tool are all pretty shallow.

If I ask myself harder questions, the project seems way less obvious:

  • Does the project fit who I am?
  • Are task lists something I get excited about?
  • Should I really build a task app?

I had to admit to myself that I'm not passionate about productivity tools. I use them and I need them, but I don't talk or write about them, as they're more of a necessary evil than something that excites me.

Looking more objectively, a task-list project just doesn't fit me. It isn't very Bruce. I love the idea. I'm proud of my clever design and approach. But the thing itself isn't something that lights a fire under me. I'm not excited about the thing; I'm excited about the hubris of it.

From point A to point B

To figure out if a project is really me, I think I have to do a few things. First, I have to imagine that it's the only thing I'm working on. Am I still excited a month from now? Am I talking about it? Am I willing to do the boring work to finish it?

I also have to imagine if it's something that fits what I've done already, at least the stuff I'm still excited about. It needs to fit where I'm going too. If I draw a line through those things and extrapolate, do I like where it points? Is the thing it points to a better version of me?

So I now have a new question for my side projects: is this project really me? Is it a very Bruce thing? I have lots of project ideas that are totally me. I think I'll focus on those.

#Weblog

There's a lot to like about our beloved task management tools. But if we're honest, there's a lot they get wrong too. Here are a few ways TODO tools grind my gears:

  1. Every task and list looks exactly the same.
  2. Lists are organized with limited hierarchy and prerequisites.
  3. Lists are organized with no real spatial control. (Kanban boards are a great example of how spatial organization can vastly improve how we think about certain types of lists.)
  4. Task lists are either a huge black hole or a cacophony of noisy craziness.
  5. Task tools can be too atomic. A task is an indivisible line item in a single list. Grouping and splitting tasks is often manual and way too much work.
  6. Importance is missing from most list tools; everything has the same weight; there are only minor visual differences between lists and items.
  7. TODO lists require careful weeding, and they accumulate cruft really easily.
  8. It can be difficult to work a list; one task is usually a dozen, and not all completed tasks are interesting.
  9. Task lists don't care where you're at personally; they just sit and stare at you. I swear my lists judge me some days.
  10. They don't play nicely with your schedule. Not everything on your lists needs to hit your calendar, but the stuff that does really does.
  11. Completing a task is more than checking it off. Tasks have more exit states than done, anywhere from “Wow, that was a bad idea” to “This doesn't matter anymore because of such-and-such”. (There's a sketch of a richer task model after this list.)
  12. Huge lists are super demotivating.
  13. Huge sets of completed tasks don't have the positive effect you wish they would.
  14. Some tasks are time sensitive, which is more than just when they're due.
  15. Some tasks are recurring, but not in the it's-due-every-Wednesday sort of way. Meaningful tasks like “You should really filter through the bug list this month” need to happen regularly, but not monthly on Tuesday mornings.
  16. Some tasks are just plain different than others. I'm looking at you, “Find inspiration today”.
  17. I finish certain types of tasks better at certain times of day, or when I'm in certain moods.
  18. And finally, task lists are very rarely inspiring. I've never looked at my TODOs and had an epiphany.
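
None of these gripes would be exotic to model, which is part of the frustration. Here's a minimal sketch of a richer task in TypeScript; the shape and names are my own invention, not any existing tool's API:

  // A hypothetical task model with real exit states, weight, and fuzzy
  // scheduling. Purely illustrative.
  type ExitState =
    | "done"
    | "bad-idea"   // “Wow, that was a bad idea”
    | "obsolete"   // “This doesn't matter anymore because of such-and-such”
    | "split";     // broken into smaller tasks instead of finished

  interface RichTask {
    title: string;
    weight: 1 | 2 | 3;                    // importance, reflected visually
    exit?: ExitState;                     // unset while the task is still open
    recursRoughly?: "weekly" | "monthly"; // fuzzy cadence, not “every Wednesday”
  }

  const weedTheBugList: RichTask = {
    title: "Filter through the bug list",
    weight: 2,
    recursRoughly: "monthly",
  };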

#Weblog

I don’t like the term technical debt. We mostly use it without thinking, and it’s often the wrong way to frame the value of our software designs. Most of the time we’re being dishonest when we call our decisions debt: unless we’re planning out the long-term costs of our approach, it’s not debt at all.

With financial debt, money is mostly money, regardless of the source. The terms of a loan are known up front when we borrow money, and those terms may or may not be satisfactory. Any decision to borrow is based on a known commodity, and on the need and the terms, both compared with the gains. Taking on debt, at least in business, is an informed and unsurprising proposition. Debt is often a rational choice for a business too, if the risk is low and the return is favourable. It’s rare for a company’s investors to allow a business to take on financial debt that isn’t clearly understood.

With technical debt, both the commodity and the terms can be fuzzy. What’s being borrowed isn’t always known, and the risk/reward isn’t always easy to measure. If technical decisions are a commodity, they can be a volatile one. If the costs of those decisions are the terms, then those terms aren’t always agreed upon ahead of time. Taking on any debt without knowing both the value of the commodity and the cost of the terms makes it impossible to think about the proposition clearly.

At best, we treat software technical debt like consumer debt, where we blissfully ignore the commodity and the terms of our choices, focusing only on our immediate need. At worst, we label our poor technical decisions debt (especially our predecessor’s). It’s a lazy phrase, a cop-out, and is a costly way of doing business.

There is a place for actual technical debt in software projects, but it needs to be informed and planned. For example, it may not make sense to invest in a fully scaled system before proving out a concept. Starting down a simpler path can be the better choice, as you can reduce the overall risk with a moderately increased cost. That simpler path has a future cost, of course, but it can be a rational decision to defer the full cost of the solution until you know more about the features and fit to the market. Taking a split risk in approach can be good for the business, but the risk and terms need to be part of your planning.

Technical debt that rears its head unexpectedly, on the other hand, isn’t debt, it’s regret. We often regret our past decisions, as they can be very costly and inconvenient. But if we treat our remorse as debt, then we’re admitting that we’re not really suited to making technical decisions for our business. We’re missing the value of what technical debt can be, which is a predictable stepping stone to future growth.

We need to be honest about our past technical choices. Debt is something that we plan for, that has a known future cost. Regret is something that we’re remorseful for, but represents a historic lack of planning and unquantified risk. We should be taking risks in the business of software, but they should be sincere and measured actions, otherwise we’re fooling ourselves into thinking that we’re being rational in our business.

Last week I was talking about how it’s easy to conflate debt and regret when it comes to technical decisions. Technical debt is the set of simple, shorter paths in software development that you follow intentionally. Regret is more about getting lost and following unsafe paths, often blissfully unaware that you’re lost. Technical debt will feel good in the long run, as it helps you get somewhere faster at a reasonable cost. Regret, on the other hand, feels bad, as you can see the wasted time and effort spent on a path that was clearly followed by mistake.

It’s easy to feel unqualified to measure technical decisions, especially if you’re not technical. You may be disconnected from the planning process, or you may not understand the jargon and details of an approach. How can you ask intelligent questions about risk when you feel so separated from what’s happening? How can you make clear decisions about risks with incomplete technical knowledge?

Luckily, most regrettable technical decisions fail to satisfy even the most basic of principles, and risky debt shows itself in poor ratios of cost to gain.

Know the scale of your product and features

In terms of debt and regret, you can measure the risks by understanding two things:

  • the basic scale of your product, and
  • the basic dependencies in the features your product offers.

Understanding these doesn't require much technical knowledge, and that understanding is anchored to what your product does. Starting your thinking about risk from these facts, and from how they relate to what your product does, gives you a rational base to work from.

Scale includes things like the number of users, the size of the things they do, and how fast they need to do them. When you have a lot of something, you can ask questions about your features and those abundant things: how can we report on X with Y users? Can we also do Z?

Dependencies are simply how the features and pieces of your product relate, as certain features will be more important to your product than others. Those features are riskier, as your system wouldn't be viable without them, and other features may not be possible without them. Our product has to do X to do Y and Z; what if X is too slow? What if we can't do Y? Bigger, more fundamental things are obviously more important.

Understanding the scale and dependency risks gives you a set of facts you can use to anchor your thinking. The truths don't change as you develop your software either, unless you change the focus of the product.

Know which risks to focus on

To simplify thinking about risk, you can place it on a gradient of principles:

  1. Decisions that are never questioned or justified represent the risk of the unknown and unseen.
  2. Decisions that follow known bad paths represent the risk of the known bad.
  3. Decisions that don't follow known good paths represent the risk of the likely bad.
  4. Decisions that follow new paths represent the risk of the unknown.
  5. Decisions about the most fundamental parts of your system are risks of increased or root dependency.
  6. Decisions about the biggest parts of your system are risks of scale.
  7. Decisions that defer costs are a risk of future expense.
  8. Decisions that prevent key opportunities are a risk of reduced momentum.

When you think about risks in terms of principles, you can separate some of the technical from the rationalization. When a team wants to build a custom framework (a classic example), it's easy for a non-technical manager to see that the approach isn't a known good, and that it's a new path. These principles don't prevent following that path, but they do make it clear that the risks and likely costs are not insignificant.

Know when to consider risks

Considering risks is a crucial part of a healthy software development process. Knowing the size and dependencies of your product gives you a place to anchor your thinking. Identifying the bigger, more fundamental issues in your product using principles helps you identify the most basic major risks, as well as giving you a way to describe the risks themselves. But, do you consider every risk? Do you weigh every change? Most organizations can be improved by considering the risks of only a small portion of their technical decisions. Anything foundational or large should be considered carefully, and anything that violates one or more of your team's core principles should be actively avoided.

Know your history

Finally, another great way to identify risks is to learn more about the history of software failures. The principles that identify risk are clear throughout the history of failed projects, and the ways that the risks remained hidden help to identify future failures.

The New Year came and went without much of a fuss. I read about the 2016 Maker Challenge shortly after the holiday, in the flood of annual self-help and 2016 resolution articles. The challenge was something I was keenly interested in, then promptly forgot about in the chaos of startup and family life.

The MegaMaker logo

The 2016 Super Mega Awesome Maker Challenge

If you haven't heard of it, the Maker Challenge is a quest aimed at pushing Makers to make more. It encourages you to curate a list of your ideas, tracking your progress through the year. Folks serious about the challenge publish their lists too, which keeps them accountable to their goals.

It's funny: the idea of making a list of ideas public seems so counterintuitive, as in the business of software we under-promise and over-deliver. Talking about something too early can set the wrong expectations, especially as a project changes. I've gotten in the habit of never talking about things before they're almost done, and so I end up sitting on dozens of ideas for smaller projects that few people hear about.

The Maker list isn't all about huge things; it's about making and finishing things. The projects are contained and concrete, almost like exercises and challenges. The lists I've seen remind me of what writers often do, forcing themselves to write short stories, or to make a habit of entering writing contests. The idea seems to be about pushing to do something, and to vary what those somethings are. The goal is twofold: both to make stuff, and to be making stuff.

Thinking about making and making my smaller projects public is somehow exciting.

The list starts here

Project #0 was to move the list to a Google doc:

2016 project list

I will update the list as I work through it. I'm using it to obsess over making stuff, to track ideas, and to get motivated to do things.

#Weblog

I've noticed an interesting shift in computing over the last few years. It's one of those changes that becomes obvious in hindsight, though while in the moment it was confusing and disorientating. It turns out I was blinded by my own path of coming up in the industry.

I'm never surprised when a bias gets in the way of seeing something. It's a constant of human progression and the fundamental reason we science. We're aware of our limits, and we pursue knowledge from the perspective of disproving ourselves so that we can uncover the truth despite the limits of our ability to observe and think. Science is cool.

That same set of filters is much more difficult to apply to industry trends, as the facts are fluid, and the reasons for those facts are based on a complex set of interactions between people (who themselves are gooey masses of crazy).

The search for sparkles ✨

Finding talent in computing has always been difficult. There is a large gap in how well people perform while building software, where one developer can be between 5 and 10 times more productive than another. Individuals may also perform better or worse depending on how they fit into their team, and how they mesh with the software stack in use. Measuring these differences in interviews is difficult.

I've used a number of tools when interviewing developers and have found what most researchers have found: that standard measurement tools like brain teasers aren't really that useful. I've used written tests, whiteboard tests, brain teasers, and even off-the-wall questions when interviewing people. I found that the standard tools failed to gauge actual job performance; rather, they measured people's ability to perform in interviews.

I have found a few less rigid techniques for measuring potential hires, mostly in gauging trajectory and passion. Is the developer excited about learning? Are they excited about making software? Do they actually do both of those things? Combined with the standard do-they-fit-who-we-are gut reaction, I've been moderately successful in finding some great people.

⛰ The landscape is not a constant

Now this is where it gets interesting. I've been interviewing and hiring for almost 20 years. I have accumulated a bunch of questions I ask people. And while the technologies I talk about have changed, I have always expected certain skills and behaviours from specific levels of software developers. In terms of soft skills this has been very successful, but recently I've noticed that the hard skills have dissipated.

This spring I've been hunting for a few senior developers. I've talked to dozens of incredible people, but have been confused by the lack of experience in system design techniques. Looking back, I can see signs of it years ago, though in recent interviews these skills have mostly disappeared.

When my own perspective fails me, I look to other pillars of the local scene. I talked about it with a good friend, asking him if he had noticed the shift in any of his recent hires. He's also seen a lack of system design understanding. We emailed back and forth for a few days and came up with some plausible reasons for the gap.

✨ Sparkles become more rare over time

We agreed that skills like system design are (and should be) more rare now than in the 90s. There are a bunch of reasons for this, but mostly it's the result of faster and larger computing resources, combined with tools that are much more capable. While this seems very obvious in retrospect, it was blindingly difficult to see while interviewing people.

Most software development problems in the 90s required some amount of system design. Resources were usually constrained by some aspect of the problem, and tools like capacity planning and careful plotting around the inherent system limits were absolutely necessary. Junior developers had to learn these design techniques to progress to mid-level developers. Senior programmers taught these skills, and were the ones held responsible in the many cases where planning failed to foresee limits that blocked a project.

As the industry matures, the risk of failure due to limited resources decreases. Additionally, the tools have progressed, reducing another common friction from 20 years ago. There are fewer times when system design is required by every team member.

And as hardware, tools, and the industry mature further, the business of software evolves with it. Many businesses now focus on little-a agile techniques, which are iterative and fast. We call it moving fast, which is no longer the desperation of the Valley startup, but a viable approach to building software. It wasn't possible in previous eras of computing, as the technical risks required moving slowly. As those technical hurdles melt away, early startups are freed to obsess over features and utility instead of spending all of their time figuring out if the problem is soluble at all.

We of course still do system design, though there are fewer people that need to learn it. Businesses can hold off on system level design until they're scaling, as the risks justify the tradeoff in development speed. It's a crazy and wonderful time to be in computing.

🦄 Unicorns see other unicorns 🦄

Now the question remains: how was this shift in computing knowledge surprising? I came up in the industry at a time when most of the problems required careful systems design. My incredible mentor came up in the Proterozoic era of computing, where even the smallest of problems required careful design. I learned to think about software in terms of capacity, data, and flow first, using diagrams to think and communicate more clearly.

Today? Most mid and senior level developers can work through problems with less rigour. There is less need to understand how the engines work, and there are many problems that the current generations of developers will never need to think about. Questions about systems planning, measurement, and design are above the experience of these generations.

Deep knowledge has shifted up over each generation of developers and designers over the last 50 years. Deeper design knowledge is now relegated to unicorns and architects. And that's okay. We'll still use it, and we will continue to teach it as deep understanding always pays off over the long term.

#Weblog

I’m not old yet, but I’m becoming a curmudgeon. I even love the word curmudgeon, it’s a word that sounds like its meaning, with a spelling that feels all pissy and annoyed. It’s a word of mystery, and we know very little about its origin. It’s an interesting word, and interesting is good.

I’m not a writer by trade. I write of course (we all do these days), but I’m not a professional. Despite this, I feel a strong connection to our language, I feel a need to savour words and protect how we use them. I feel uncomfortable when I see writing that wanders into the safe, passive, and bull-shitty mess that I see regularly in marketing and business. It’s a lesser language: it communicates poorly, using more words, and with less inspiration.

It’s not purely a problem of passive phrases and dash-encumbered-words either. It’s a pattern that strips the passion out of what we’re saying. Maybe it’s a way of thinking that plays it safe, a way of writing that follows conventions that are easy (but weak), or a lack of editors calling us out on our shitty work. Regardless of the cause, the fix is easy: write, throw most of it out, and write some more. Pull out the crap, work harder to write honestly, and try to say things that make people feel something. Or, just maybe, perhaps, shut up for once?

I have tried to love podcasts for a few years now. There are several that I like, but I find it difficult to listen to any of them consistently. I'll binge listen for a few weeks, but for whatever reason I get stuck and move on to the next show.

There are a few things that podcasts do that frustrate me:

  • Low quality production
  • Shows that go too long or that ramble
  • Shows that take long hiatuses
  • A surplus of opinions and a deficit of facts

I admit that part of the problem is the way I find, consume, and collect shows. iTunes isn't great for discovery or browsing. It does well enough at distribution, but the store is still too monochromatic for the social aspects (shut up, Ping). I end up hopping between sites to find shows, to check in on show statuses, and eventually I just don't remember to do those things. I have lost contact with a few shows due to issues with iTunes or my podcasting apps.

I know that the platform will improve, so I can live with the fragmented social, news, and show information bits. I can also live with gaps in production (TV and Netflix have me trained to binge and wait). What I can't learn to love are the crappy production things. I discovered that there are specific formats that work way better for my brain: shows that are carefully cut, curated, and polished are the ones I consistently enjoy.

There are a few podcasts that are carefully produced, which makes a world of difference.

Things that are important to keep me interested in your show

  • Planning and research keep a show focused and factual. Opinions are interesting, but facts to back up the gut-feelings are way cooler.
  • Production values can turn a so-so show into a good one. Intros, theme music, nice cuts, carefully considered segments and lengths are a few of the things that stand out in the shows I like.
  • Flattering and interesting material is better than incredible guests. I recently listened to an interview with one of my childhood heroes; the audio quality and interview focus were so poor that I lost some respect for the show and my childhood hero. I would have preferred not to hear the botched interview.
  • Broadcast chops matter. I can live with following a podcast through to maturity, but the people involved should be studying and improving. Simple things like excessive filler words (“uh”, “ah”, and “like”) and low-quality audio capture really subtract from the experience.
  • Shorter is way better for some reason. Cutting out the boring parts and keeping the conversations focused keeps me listening.

The balance of production quality, focus, and length makes the most difference to me. Facts are a close second: opinions get old quickly, and facts are so easy to check.

#Weblog

We tried something new on art day recently: printing with Lego. It has potential, even if it's a bit of a pain to clean up.

#Art #Weblog

I find that software developers struggle to sketch diagrams of their software. They get lost in the specifics of diagramming techniques, in choosing from the many available tools, or in the futility of drawing diagrams at all. I understand their pain, as there are many standards for diagrams and many (often obtuse) tools to draw with. It's not very motivating to have a sea of choices, none of which looks particularly appealing.

I think about software design by diagramming and writing. The act itself improves the result. It forces me to decompose and organize the problem, and attempt to explain it back to myself. I have always been able to think about software through this process of sketching, refining, and describing diagrams, even when I didn't know anything about what was standard or what should be done. I started by doing what made sense to me.

Over the years, I found that people's mileage varied with my diagrams and documentation. Sometimes a diagram would communicate clearly and other times it would baffle. I took the time to figure out what I didn't know, reading and trying pretty much everything I could get my hands on. And oddly enough, many of my earliest diagrams were the most useful, before I started to adopt the more specific technical styles. Why was that?

There are many paths, many shoes, and many feet

There are as many diagramming standards as there are development languages. There is a common subset that is much more manageable, but it's still easy to get lost in the choices if you don't have an understanding of the history and classifications. It's easy to be fooled too, as even an obtuse way of drawing will start to make sense to you if you practice it often enough.

One of my early mistakes was using diagrams that didn't suit the audience, the problem, the level of detail, or even my way of thinking. A great example was when UML became popular. I drew all sorts of architecture and design diagrams using its parts, and for my own thinking it was fairly productive. But I found that these diagrams communicated poorly, as they captured the wrong level of detail for many types of conversations. They also missed other details that are important in higher level thinking about a system. It wasn't just a problem with UML; I was applying it poorly too.

It turned out I was focusing on the trees before the forest. Much of UML is, for example, great for showing the precise details of things. These detailed schematics don't always show the hierarchical or proportional relationships well, but in terms of the finer aspects of design they are great. But try to use a class diagram to explain how a larger system functions and you'll be losing out on all sorts of important hints and cues.

Now if you're detailing the design of a class library, or the interactions of a protocol, then many of the UML diagram types are great. If you're discussing features and architecture, you'll be missing important parts of your story. You need to understand what you're trying to capture and communicate before picking a style of diagram.

Designing chopsticks, a completely trivial example

Let's take a simple problem of design: how do chopsticks work? We can imagine them as a mechanical problem, as a software service problem, or as a manufacturing problem.

A pair of chopsticks

Consider the humble utensil: chopsticks. Two pieces of bamboo, plastic, or steel. Tapered. Textured. Packaged. Done.

If we were defining how to build chopsticks, we might describe what they were made of; their length, their properties, or their use. If we were coding up a chopstick abstraction for our latest game, we might draw their properties, or relationship to other utensils, or their data storage. If we were selling a web service that provided RESTful chopsticks, we might show their architecture or APIs. If we were Apple, we might describe the detailed manufacturing process that makes them what they are.
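
To make the game framing concrete, a chopstick abstraction might start out like this (a TypeScript sketch; the names are invented, nothing canonical about them):

  // A hypothetical chopstick abstraction for a game.
  type Material = "bamboo" | "plastic" | "steel";

  interface Chopstick {
    material: Material;
    lengthMm: number;
    tapered: boolean;
    textured: boolean;
  }

  interface ChopstickPair {
    sticks: [Chopstick, Chopstick];
    pickUp(item: string): boolean; // the one verb that matters
  }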

Which style of diagram would help you come up with a design for some choice chopsticks? Which would help you build them as a web service? Which would help you explain their purpose to your customers?

Feature diagrams

We could define chopsticks in terms of their utility:

Chopstick features, as a Venn diagram

Of course this is a Venn diagram, a joke (and a bad one at that). Feature diagrams are often nonsensical, as they show features in boxes and circles. They're meaningful in terms of showing features and their relationships, but are certainly not something you could build software from.

Feature diagrams are useful, however, especially to users who want to understand your product. Finding a visual language for the people who use your software is important, as it can simplify your documentation and support. It lays a foundation of icons, flow, and terminology. Users think about what they need from the software and what they create with it. Helping them think in the language of your software can make it easier for them to become productive with it. This is good.

Composition diagrams

Or, we could define the constituent components of chopsticks:

The components of a chopstick

These are great diagrams for understanding the gravity of features or components. It could just as easily represent the types of languages or services used, or the number of types of data in the system, or the composition of the user base.

Functional diagrams

Or we could define how chopsticks work mechanically:

Chopstick function, in Venn form

I use mechanical style drawings for thinking through states, sequences and componentization. This diagram, of course, is a silly summary of states in Venn form.

Functional diagrams allow you to use space and size to define how components relate. Lines and arrows can show flow, and containment can show composition. These mechanical drawings have standard forms in UML and various architecture standards, so it's worth reading up on what's done elsewhere if working with teams familiar with specific diagramming standards. You're also free to forge your own way with these diagrams, though they will work better if you rely on at least a few familiar conventions.

For sketching, obviously, you can pick and choose what works best for you. This is mostly how I think in design. I love to draw software and interfaces in terms of machines and ad hoc drawings. The freeform style is fast, expressive, and fun. These are important attributes for thinking in.

Architecture diagrams

Now if we were building chopsticks as software, we could define their architecture:

Chopsticks as a software architecture

I love architecture diagrams. They represent the polished, high level thinking about a system. They look a lot like things you'd see in marketing materials, high level enough to be approachable, detailed enough to provide insight. You might not be able to build software directly from such diagrams, but they are great for giving people an overview of your software or service. They are especially useful when bringing on new staff at all levels, as well as showing clients, customers, and investors what you do. They provide a vocabulary for everyone to share, with a spatial sense, both visually and in terminology.

I find that architecture diagrams work best when they borrow concepts from lower level diagramming languages, but simplified and more iconic. This aids familiarity, and provides an expressive way to pack information into the limited format. It's also a way to lead people into your detailed designs, anchoring their understanding in the simpler overview.

I believe high level diagrams are the most important development artifact when they accurately represent your software and its ecosystem. They represent your core values in terms of a defined language, and they map your way through design and construction. Your teams and management run blind without this shared understanding.

API diagrams

We could also dive into more detail and define a pair of chopsticks as a web service API:

Chopsticks as a web service API

API diagrams are spatial maps of your web service components and their namespaces. They are a lower-level slice of functionality specific to the web, but similar to component diagrams. These diagrams extend the vocabulary of your software, and decompose it into spheres of influence and layers of implementation.

I also love API diagrams, as they help me think about what goes where, and about the actions and data. Thinking about decomposition is helpful for finding holes in your understanding and for finding organization before you build out the system (or when adding to it).
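
To give a flavour of what such a diagram maps, here's a hypothetical slice of the RESTful chopsticks service sketched as routes (TypeScript with Express; the paths are invented for illustration):

  import express from "express";

  const app = express();
  app.use(express.json());

  // The /chopsticks namespace: the nouns the API diagram would map.
  app.get("/chopsticks", (req, res) => res.json([]));      // list pairs
  app.post("/chopsticks", (req, res) => res.status(201).json(req.body));
  app.get("/chopsticks/:id", (req, res) => res.json({ id: req.params.id }));

  // Actions live under the noun they act on.
  app.post("/chopsticks/:id/pick-up", (req, res) => res.json({ held: true }));

  app.listen(3000);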

There are many standard diagram styles for component diagrams, depending on your audience and specific purpose.

Lower level diagrams

As we continue to describe each part of a system, we find our way to lower level diagrams. These include things like:

  • schema diagrams (ERD, for example)
  • component and library diagrams
  • class diagrams

We also start to wonder about how these pieces interact, for which we use lower level sequence, state, and transition diagrams. Tables are also useful for states and transitions, and I have found no single approach useful for all types of detailed planning.
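
For example, the grip of a chopstick pair reduces to a tiny transition table, sketched here as TypeScript data (the states and events are invented for illustration):

  // state -> event -> next state
  const grip: Record<string, Record<string, string>> = {
    open:   { squeeze: "closed" },
    closed: { release: "open", slip: "open" },
  };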

And everything else

When considering interactions we use wireframes, information hierarchies, site maps, user interface mockups, and so on. These diagrams are just as important, and should also share pieces of your visual design language.

My path through the forest

I have applied many types of diagrams to various stages of the design and development process. Some have worked and others have not. Part of my learning had me figuring out what worked for my own exploratory thinking and part of it had to do with finding out how to improve what other people understood of my designs. Some of the challenge was in the type of diagram used, some of it in the level of detail, and some of it in the style applied to the drawings themselves.

If you look at my simplified examples you'll see a few common themes:

  • a shared and expanded terminology,
  • a shared and expanded set of shapes,
  • a shared and expanded set of colours

I tend to vary the colours and shapes in my design diagrams, though there are common themes. For example, I tend to use icons from standard diagramming languages in higher level diagrams. I also tend to pick fonts and colours that suit the domain or product branding, to bolster familiarity and ownership of such things. I also pack in a bit of humour, where appropriate, to keep people alert and enjoying themselves.

I find that these elements translate nicely to pencil sketches and whiteboard discussions too. If you start to work on a language for your software, your entire team will gain from the tools you have provided for them. They will be able to leverage existing definitions and extend them with their own design details.

But making diagrams is difficult

Navigating through the universe of diagramming techniques and ways of applying them is in itself a huge undertaking. Add to that the difficulty in recording and producing attractive diagrams and you have a task that seems impossible.

I'll let you in on a secret. It's really very simple. You just draw. Keep drawing, enjoy it, learn your tools, read a bit, and before you know it you'll be thinking visually.

So how do I produce diagrams? Is there a magic tool that will save us all?

I learned dozens of tools before I realized there was no perfect tool. Today I still use a variety of tools to translate what I see in my head to paper (and I, of course, also use paper).

At first learning multiple tools seemed counterintuitive. It felt like a waste of time. I mean, why learn several tools when I could learn one really well? It turned out that learning various tools taught me the fundamentals of drawing in a way that makes it simpler to use any drawing tool. It gives me choice, dexterity, and a richer palette to work from.

The principles of making technical drawings are simple, once you realize there is no magic. You just do it. You work around the limitations. You nudge things by hand. You use what the tool does well and ignore what it doesn't. For me, I would get frustrated with a tool and quit, when I really should have been frustrated at my own lack of persistence. Stopping only guaranteed my failure.

Incorporating diagrams into documentation can also seem tedious. Given time, however, I found that you just do it. As my agility improved, the cost of the tedium went down. I also have a stronger vision of what I want in my head and I use that to brush off the more annoying aspects of the process. Eventually the annoying parts fade away, and you just make stuff.

Well that was long-winded

TL;DR

Design is difficult at first. Too much choice, too much to learn. But, it boils down to a few principles:

  • use a level of detail matched to the audience,
  • work your way down in detail (not up),
  • develop your own visual language, using the blocks and techniques that exist,
  • stop complaining about how difficult the tools are already,
  • and just do it

In the end, defining a way to talk about your design is more important than what you don't know. Your design sketches will unify how people think and talk about your software, and will lay the groundwork for extending and improving the things you build. The learning will also speed up your own thinking about design and improve the polish of what you build.

#Weblog
