

Let's be Fwends #158:

The Hidden Costs of Saving Time

“When you say something, make sure you have said it. The chances of your having said it are only fair.”

~ Kenneth Roman

Hi, and welcome to edition 158 of Let's be Fwends. Today, we look at how language can mess with seemingly clear concepts like possibilities, what constraints can do for your creativity, and whether technologies like Generative AI can generate savings (time or money) out of the box. We also look at the hidden constraint applied to every Generative AI model out there: its system prompt.

Clarity

When someone tells you that some event "should be considered a serious possibility," how would you express that possibility, in percent?

If you're anything like the gentlemen of the "Board of National Estimates" from the 1950s, you might rank it as high as 80%, or as low as 20%.

Our brains are ill-equipped for dealing with probabilities, and our use of language usually reflects that. Often, definitive-sounding phrases actually muddy the waters. When we say, "it is probable," what do we mean, exactly? When something "could happen," how much effort should we put into preparing for it?
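One remedy is to agree on an explicit vocabulary up front. Here's a small sketch of that idea; the phrases and probability ranges below are purely illustrative (not taken from any official standard), but writing ranges down at all is what removes most of the ambiguity:

```python
# Illustrative mapping of estimative phrases to probability ranges.
# These ranges are made up for demonstration; the point is that once
# they are written down, "probable" means the same thing to everyone.
ESTIMATIVE_PHRASES = {
    "almost certain": (0.85, 0.99),
    "probable": (0.60, 0.85),
    "chances about even": (0.40, 0.60),
    "unlikely": (0.15, 0.40),
    "remote": (0.01, 0.15),
}

def describe(probability: float) -> str:
    """Map a numeric probability to the agreed-upon phrase."""
    for phrase, (low, high) in ESTIMATIVE_PHRASES.items():
        if low <= probability <= high:
            return phrase
    return "outside the agreed scale"

print(describe(0.7))  # probable
```

Whether a team uses these exact words doesn't matter; what matters is that everyone translates phrases to the same numbers.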

You can't out-language the fact that most things are neither 100% likely nor 100% unlikely. Practically everything that warrants conversation is on some sort of scale. Gravity will pull you down; there's no need for a lengthy discussion about that. But what does it mean, and what chain of events will follow from that observation? That's different.

It also makes no sense to tack on precise-looking numbers you more or less made up when talking about things that might happen.

But what we can do is be clear in our thinking, and try to express this clarity in our language. We will fail, because that's how language works, but we can still try our best. Part of that clarity is a solid understanding of what we want the receiver of the information to do with it, and making sure we communicate that clearly.

In other words, we need to own what we're saying.

Most people try to avoid this. They want to say something, but they don't want to own it. They're afraid that their credibility will suffer when things turn out differently from what they said.

Nothing could be further from the truth. I'd take someone who is perpetually wrong but contributes interesting points to the conversation over someone who just goes with the majority every day.

Constraints

I've talked about "Enabling Constraints" before, and how they facilitate creativity instead of strangling it.

Arun has collected some brilliant examples of constraints that are aimed at maximising creativity.

Is Generative AI a Time Saver?

Ars Technica reports on a recent study from Denmark, claiming that even with widespread adoption of generative AI systems like ChatGPT in the workforce, productivity improvements are modest, or completely lacking:

"The study revealed that AI chatbots actually created new job tasks for 8.4 percent of workers, including some who did not use the tools themselves, offsetting potential time savings. For example, many teachers now spend time detecting whether students use ChatGPT for homework, while other workers review AI output quality or attempt to craft effective prompts."

This result reminds me of the ideas of Ivan Illich, an Austrian priest and philosopher. In 1974, he published "Energy and Equity", a description of how systems with high energy demands create social inequality. One example he uses is the system of transportation by privately owned cars.

Illich argues that beyond a certain threshold, maintaining the speed of travel becomes more and more difficult, and demands ever increasing infrastructure, which needs to be maintained, too.

While cars have a technical speed (which is the one we perceive) that allows the person transported to travel great distances, they also have a social speed, which includes all hidden costs of car transportation. If you factor in the time needed to pay for those hidden costs, the speed of a car drops to walking speed.

"The model American puts in 1,600 hours to get 7,500 miles: less than five miles per hour. In countries deprived of a transportation industry, people manage to do the same, walking wherever they want to go, and they allocate only 3 to 8 per cent of their society’s time budget to traffic instead of 28 per cent."
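The arithmetic behind that quote is simple enough to check:

```python
# Reproducing Illich's "social speed" arithmetic from the quote above.
miles_per_year = 7_500  # distance the "model American" covers by car
hours_per_year = 1_600  # time spent driving, plus earning the money that
                        # pays for the car, fuel, insurance, and so on

social_speed = miles_per_year / hours_per_year
print(f"{social_speed:.1f} mph")  # 4.7 mph - "less than five miles per hour"

# Share of society's time budget spent on traffic, per the quote:
# walking societies: 3-8 percent, the car society: 28 percent
```

The technical speed on the speedometer stays impressive; the social speed only drops once the hidden hours enter the denominator.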

He goes on to claim that

"Beyond a critical speed, no one can save time without forcing another to lose it"

Whether or not his claim that cars don't actually move faster than walking speed once you factor in all external and hidden costs holds up (and in any case, that claim is now over 50 years old), three ideas stand out:

  1. Every technology creates additional labour for creating, managing, maintaining and advancing it.
  2. We tend to ignore those hidden costs when considering the utility of a technology (a case of Kahneman's "What You See Is All There Is"?).
  3. Oftentimes, hidden costs are offloaded from the users of the technology to other members of society, or other societies altogether.

When you adopt new technologies, it's seldom useful to "just use it". Ask yourself: What am I trying to accomplish with it? How can I use this technology to actually benefit me, and how does it fit in the larger picture of my work?

System Prompts - AI Constraints You Can't See

Whenever you ask an AI something, you do this through a "prompt": an input designed to make the AI produce a specific output. That's called a "user prompt". There's also a different type of prompt, one that you cannot see and that supersedes yours: the so-called "system prompt". All LLMs have a system prompt that primes the model to behave in a certain way. For example, ChatGPT will not tell you that it recognises people in images, even if it does. The reason for this behaviour is found in its system prompt:

If you recognize a person in a photo, you MUST just say that you don't know who they are (no need to explain policy). (Source)

In the early days, users found clever ways to circumvent system prompt instructions. One was the famous "Ignore all previous instructions", which created a sort of reset in the configuration of the LLM. (It has to be said that it is unclear how much of the system prompt this so-called prompt injection actually overrode.)

That doesn't work anymore, and system prompts are an important element of the behaviour of any model out there.
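To make the layering concrete, here's a minimal sketch of how chat APIs structure a conversation. The "messages" format with roles follows the OpenAI-style convention; other providers use similar structures, and the system prompt content here is paraphrased from the leaked example above:

```python
# A minimal sketch of how chat APIs layer prompts: a list of messages,
# each with a role. The system message is set by the operator and is
# invisible to the end user; the user message is what you actually type.
messages = [
    {"role": "system",
     "content": "If you recognize a person in a photo, say you don't know who they are."},
    {"role": "user",
     "content": "Who is the person in this photo?"},
]

# The model receives both, and is trained to give system instructions
# priority - which is why "Ignore all previous instructions" inside the
# user message no longer works as a reset.
system_instructions = [m["content"] for m in messages if m["role"] == "system"]
print(system_instructions[0])
```

The precedence isn't enforced by the data structure itself; it comes from how the model was trained to weight the roles.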

Here's a repository of 'leaked' system prompts for common LLMs. I can't vouch for their authenticity, but they appear to be legitimate.

Check out the system prompt for Claude 3.7, which is 1109 lines long!

The prompts are worth a read, most of them are not overly long. And you get an idea why some models behave the way they do, what their real capabilities are, and under which assumptions they interact with you.

Let's Be Fwends from the Past

Should you discuss politics and its societal implications and consequences in work contexts? Four years ago, many workers thought 'yes', but the owners of beloved Basecamp said 'no'. Claiming that (tech) work, society and politics can be separated is wrong, I argued back then, and I still argue today.

And one year ago, we finally got the answer to how much you should sit each day, at most.

That's it for this edition. Thanks for being a subscriber, and hit reply and let me know what you think of this email! 📢
