“A cat is black.”
Native English speakers will read that sentence and infer an alternative meaning for it. We recognise that this particular arrangement of those particular words is unusual: the writer probably meant to say “the cat is black.” We know that when we say or write “a cat is,” there’s a strong chance we will finish the sentence by referring to the universal properties of the animal “cat,” and when we say “the cat is,” we’re probably going to talk about characteristics of an individual cat.
Our brains react at astounding speed to resolve the unusual sentence “a cat is black,” making a mental correction we store and refer back to later. English in particular is highly context-dependent: the word “read,” for instance, can only be pronounced correctly once we establish the tense (past or present) in which it is used. These sorts of contextual judgements happen on almost every word in a sentence, and they happen at staggering speeds and with varying accuracy.
These sorts of nuances make the English language particularly difficult to learn for non-native speakers, and lead to misunderstandings even amongst those well-versed in the language. The structure and grammar of English are flexible to a fault. We might be able to make some generalisations about certain words: “has,” for example, might be crudely defined as being possessive, as in “the cat has ears.” But the same word can also be used in non-possessive ways, like “the cat has jumped,” where we can no longer say the cat possesses a jump. This looseness makes it hard to form accurate predictions or rules about how a sentence is structured. However, this liberal flexibility may have some practical applications for design systems.
Given a word or small set of words, we can make predictions about what words and structures might follow. Naively, this is how the predictive text on our mobile devices works: look at the previous word, and make a prediction of what might follow. Sophisticated prediction—like that of human beings—looks at entire sentences, or even passages1 of text to provide the right context and derive the correct meaning for a word. How might we apply similar predictive structure to design systems?
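To make the naive version concrete, here is a minimal sketch of that previous-word (bigram) prediction, trained on a made-up toy corpus; a real predictive-text engine would of course train on vastly more text:

```python
from collections import Counter, defaultdict

# A hypothetical toy corpus; real engines train on far more text.
corpus = "the cat is black the cat is small a cat is an animal the cat has ears".split()

# Count how often each word follows the previous one (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word, k=3):
    """Return the k most likely words to follow `word`."""
    return [w for w, _ in following[word].most_common(k)]

print(predict("cat"))  # ['is', 'has'] — "is" follows "cat" most often here
```

Nothing about this sketch is specific to words: the “corpus” could just as easily be a sequence of component names.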
The design system could have some kind of learning function that is trained on existing UI and component compositions, creating a Markov chain of components: for each component A, the likelihood that it will be followed by each other component B.
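One way that training step might look, assuming compositions are recorded as ordered lists of component names (all the names and data here are hypothetical):

```python
from collections import Counter, defaultdict

# Hypothetical compositions harvested from existing screens.
compositions = [
    ["label", "text_input", "button"],
    ["label", "text_input", "text_input", "button"],
    ["heading", "paragraph", "button"],
]

# Build the Markov chain: for each component, count what follows it.
transitions = defaultdict(Counter)
for comp in compositions:
    for a, b in zip(comp, comp[1:]):
        transitions[a][b] += 1

def follow_probability(a, b):
    """P(next component is b | current component is a)."""
    total = sum(transitions[a].values())
    return transitions[a][b] / total if total else 0.0

# 2 of the 3 observed successors of a text input are buttons.
print(follow_probability("text_input", "button"))
```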
We could imagine making use of this data in the design process. When using a component, we can take what we know about how that component is typically configured and what other components typically follow it, and offer suggestions in the design or engineering tool to help ‘complete’ the composition, or at least encourage alignment with the most common uses. If, for example, I add a text input to my design, our prediction model tells us that text inputs are typically accompanied by a button, a label, or even another input.
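Given transition counts like those such a trained model might hold (the numbers below are made up), suggesting a next step is just a matter of ranking the most common successors:

```python
from collections import Counter

# Hypothetical counts of what follows a text input in existing designs.
after_text_input = Counter({"button": 40, "label": 25, "text_input": 15, "checkbox": 3})

def suggest(counts, k=3):
    """Top-k most common successors, as (component, share) pairs."""
    total = sum(counts.values())
    return [(c, n / total) for c, n in counts.most_common(k)]

for component, share in suggest(after_text_input):
    print(f"{component}: {share:.0%}")  # button first, then label, then text_input
```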
We could even go so far as to suggest all of these possibilities at once, showing a designer multitudes of possible next steps and allowing them to use their best judgement to decide, based on the product requirements, what makes the most sense.2
In addition to making suggestions to usher a design to completion, this tool could warn us when we do something unusual. Just as native English speakers recognise that the sentence “a cat is black” is strange, our system would know that, for example, an arrangement of a dozen checkboxes and no labels is unusual. This sort of feedback about unusual arrangements of components could be especially helpful for teams that don’t have dedicated design resources.
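A minimal sketch of such a warning, assuming the same kind of per-component successor counts and a hypothetical rarity threshold:

```python
from collections import Counter

# Hypothetical successor counts, as a trained model might hold them.
transitions = {
    "checkbox": Counter({"label": 95, "button": 4, "checkbox": 1}),
    "label": Counter({"text_input": 50, "checkbox": 30, "button": 20}),
}

def warnings(composition, threshold=0.05):
    """Flag adjacent pairs whose observed frequency falls below the threshold."""
    flagged = []
    for a, b in zip(composition, composition[1:]):
        counts = transitions.get(a)
        if not counts:
            continue  # no data for this component; nothing to say
        share = counts[b] / sum(counts.values())
        if share < threshold:
            flagged.append((a, b, share))
    return flagged

# Checkboxes stacked without labels are rare in this (made-up) data,
# so both checkbox→checkbox pairs get flagged.
print(warnings(["checkbox", "checkbox", "checkbox"]))
```

A real tool would presumably surface these as gentle nudges rather than errors, for exactly the reasons discussed next.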
Of course, in language, words or arrangements of words are only unusual during their first appearances. Over time and with increased use, new phrases come into being, and the meanings of words can change dramatically. Language evolves, and UI patterns do too: the idea here is not to impose static, hard-and-fast rules on how components should be built or arranged, but to provide suggestions based on common uses, and to adapt to emerging uses as they arise.
The next obvious conclusion I can draw from this idea is that, just as in English a single word or short phrase at the end of a sentence can change the meaning of the preceding words entirely, the same is true of components: until the composition is complete, we may have a hard time determining exactly how the initial components should be arranged.
This conclusion starts to peel back one of the difficult things about Subatomic Design Systems: a property called emergence, where systems composed of small parts exhibit properties that don’t exist in the individual parts. The whole becomes greater—or at least different—than the sum of its parts. The interactions between small pieces of composed systems have properties of their own. This is something I’m still trying to wrap my head around, but am excited to dive into.
The two-sentence phrase “I had a haircut this weekend. It’s really ____” has some obvious completions for human beings: “short,” “different,” “bad,” “good.” But for machines, naive prediction may only look at the word “really” and try to predict how the sentence may end, resulting in suggestions like “long” or “excited.” Additionally, because many predictive text engines only look at the immediately preceding word, you can end up with never-ending and nonsensical sentences just by repeatedly tapping the suggestions on your phone. Try it: open up an app that allows you to type, enter “I am” and then keep pressing the suggestions that appear in your keyboard. ↩