
Predicting the Future

Lately I've been using a definition of the future that seems to kick up interesting ideas. It's not original, but it is useful:

The future is a disagreement with the past about what is and is not important.

The differences between two ages are informed by politics, technology, demographics, etc. But they are easiest to understand in terms of what each age thinks is worth caring about.

Knowing who your parents are is less important today than a couple hundred years ago. It doesn't cripple you socially as it once did. Knowing what material your water pipes are made of became suddenly important when we noticed the crippling effects of lead. Today we pay close attention to lab tests and labels and indicator lights that summarize what's really going on, because a lot of important stuff is too small or complex to comprehend directly. If I had to pick a major disagreement between the present world and the past, it would be the importance of invisible amounts of mass and energy, be they trace chemicals or transistors. Moreover, we tend to care about emergent information content, the patterns in the material, rather than the actual material.

To a typical Victorian that wouldn't be heresy so much as fantastic nonsense. Your great-grandparents' world was populated by people, animals, and human-scale artifacts. Man was more literally the measure of all things. Important things were assumed to be big and obvious, or at least visible to the senses. Germ theory was one of the first ideas to break that assumption in a serious way. It didn't help that its companion idea, vaccination, was even weirder.

The point is that if you want to do truly "futuristic" work, you can't just extrapolate from what is believed right now. You have to suppose that at least one aspect of our worldview is wrong: something we hold dear is not actually important, or there is something else that should be, or both. If you imagine people feeling and acting exactly the same way as they do now, that's not the future. That's just later on. Also remember that the future is not an ever-upwards spiral. Who would have predicted in 1930 that mass slavery would return to Europe?

The flip side is that these discontinuities make predicting the future hard. The clues to how we may think are camouflaged among thickets of established fact. You have to isolate the assumptions underneath our view of the world, alter them, then look around to see what changed. The deeper and more unspoken the assumption, the greater the potential for change. On the surface it sounds like a pretty stupid way to spend your time: disassembling ideas that aren't broken in order to discover ways to break them. On the other hand, that's precisely what tinkerers do with machines: break them in order to understand and improve on them. That is, I think, what is meant by the motto "the best way to predict the future is to invent it". The initial spadework is the same whether you are predicting or inventing.

You needn't court-martial everything you think you know, but you do need ways to identify suspicious areas, and tools to dig deeper. So what might they be? There is one that I was mighty proud of inventing, until I described it to a Literature major.

Popular culture as a lens

One way to find these unspoken assumptions is to examine the treatment of a futuristic subject in popular culture, pick out the common themes, and ask whether they make sense. Popular culture is pretty good for this kind of practice, though what you get out of it may not be earth-shattering. A fiction writer's stock-in-trade is ideas that feel right but are not necessarily backed up by evidence.

For example, science fiction overflows with stories about artificial beings. What do they have in common? Well, a common trope is that artificial intelligences are sufficiently human de facto, especially if the story's conflict is about their status de jure. It's rare to find a story about AI which concludes that they are utterly and forever alien. Wintermute employed synthetic emissaries, and spoke casually about its motivations and desires. Agent Smith is explicitly Neo's mirror image. HAL 9000's voice was as warm and soothing as a late-night radio host's. The golem of Jewish folklore is an interesting case. Golems are humanoid but explicitly not intelligent. Their moral position is somewhere between djinns and power tools. Terry Pratchett explored the idea of intelligent golems in "Feet of Clay", and it turned out exactly as you would expect: after many misunderstandings they assert their rights and join the mainstream of society. It's hard to shake the feeling that AI stories are mostly allegories for racial integration.

We seem to have a hard time coming to grips with intelligence that does not have a face or a voice. When we say "intelligent" in casual speech, we mostly mean "a bloke I can have a conversation with". It may turn out that artificial intelligences are not able to evoke empathy from or experience empathy for natural born humans. I'm not sure I like the idea of what happens when we compete with such beings for resources.

The phrase "artificial intelligence" itself may harbor simplistic assumptions, like calling an X-ray machine a "magic lantern". It's not a bad analogy, but it misses important things like how X-rays are generated, other species of exotic radiation, and how too much of them will kill you. Imagine a passive fabric of knowledge that, when and only when directed by a human, accomplishes superhuman feats. The conflict would not be over the humanity of the entity in question, but over which humans control it.

So, an unoriginal though interesting prediction: the idea of artificial intelligence having a distinct personality with recognizable motivations and desires is attractive, but there is little evidence that it must happen.

Subtextual subversion

Fans of hard science like to deride Derrida for being pseudo-intellectual, but this literary method is more or less what he was talking about. My wife was very pleased to point that out when I tried to pass it off as my own invention. Deconstructionism got lost in the weeds because its practitioners don't use reality to verify their theories, but the basic method seems sound. Can you use it on other bodies of literature, not just fiction?

Over the last ten years, the field of data mining shifted its focus to gathering enough of the right data instead of ever-cleverer algorithms. I remember reading a lot of papers on automatic text classification in the late 1990s. They drew from a small pool of datasets, such as a collection of news articles. Innovation happened in the algorithms. This was a reasonable idea: data was hard to come by, and using the same datasets seemed like a good way to compare the performance of different algorithms. The underlying assumption was carried over from other fields of computer science: given a representative sample of data, the way forward is to come up with more sophisticated algorithms.

Researchers at Google were the first I know of to demonstrate value in the opposite: dumb algorithms executed over gigantic datasets. They came to this opinion because they had so much damned data that it was hard enough to count it, much less run O(n³) algorithms over it. So they tried dumb algorithms first, and they worked surprisingly well. Older, naive algorithms turned out to be perfectly valid; they just needed orders of magnitude more input than had been previously tried. I would not be surprised to learn that several people thought of this idea early on. I don't know enough about the field to say.
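
To make "dumb algorithm, gigantic dataset" concrete, here is a minimal sketch of a naive Bayes text classifier in Python. It is not Google's method, just an illustration of the kind of old, simple algorithm this paragraph is talking about; the labels and training sentences are hypothetical placeholders. Nothing in the code is clever, and that's the point: its accuracy scales with how much labeled text you feed it, not with the sophistication of the loop.

    import math
    from collections import Counter, defaultdict

    class NaiveBayes:
        """A deliberately dumb bag-of-words classifier with add-one smoothing."""
        def __init__(self):
            self.word_counts = defaultdict(Counter)  # label -> word -> count
            self.doc_counts = Counter()              # label -> number of documents
            self.vocab = set()

        def train(self, text, label):
            self.doc_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

        def classify(self, text):
            total_docs = sum(self.doc_counts.values())
            best_label, best_score = None, float("-inf")
            for label in self.doc_counts:
                score = math.log(self.doc_counts[label] / total_docs)  # log prior
                denom = sum(self.word_counts[label].values()) + len(self.vocab)
                for word in text.lower().split():
                    score += math.log((self.word_counts[label][word] + 1) / denom)
                if score > best_score:
                    best_label, best_score = label, score
            return best_label

    # Hypothetical toy corpus. In practice the win comes from feeding in
    # orders of magnitude more labeled text, not from changing the code.
    nb = NaiveBayes()
    nb.train("the team won the game in overtime", "sports")
    nb.train("voters went to the polls for the election", "politics")
    print(nb.classify("a close game decided in overtime"))  # -> sports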

It's possible you would have hit on the same idea, if you'd analyzed the literature for unspoken assumptions. Or, like Google, you could have played with big problems and new technology while under the gun to produce something useful.

Adopt early and often

Rubbing up against the new is another way to glimpse the future. Just as a child is immersed in a culture and then later derives the rules which shape it, early adopters immerse themselves in new ideas and new technology in order to puzzle out the shape of the future.

Some people are natural early adopters. A friend of mine is busy building an electronic library and giving away his physical books. In eight months he burned through two Kindles and an iPad, and I added a lot of his books to my shelves. The funny thing is that both of us genuinely feel we're getting a good deal: he is divesting his burden of dead trees and space, and I am saving perfectly good books from futuristic folly.

Ours is a classic future/past disagreement. He thinks it's better to move to this new new thing and see how it works. Eventually books will be published on demand and kept up to date just as websites are. Paper will be an option, and not the most popular one. I think that paper is actually a pretty good medium for archival storage. Individuals should act in concert to preserve as much as we can, as more and more of our culture becomes digital-only. I don't know which of us will be right, or both, or neither.

My friend's way to predict the future is to surround himself with new technologies. The new often embodies upcoming disagreements with the present. You still have to do the work of isolating the assumptions it breaks, and deciding whether they are correct. For whatever reason I don't have the temperament for this. My method is to surround myself with early adopters, and watch what they do.

Bring it home

Picture yourself as you were ten years ago. List five things that are different about you now. I'll bet money that most of them are differences in your attitude towards the world. Now picture yourself ten years from today. You probably imagine external qualities: you will be more successful, or relaxed, or in Alaska. But most likely the biggest differences will be internal, what Future You thinks is truly important. It's hard to predict exactly what would change. If you knew that, you'd already be on your way to becoming the Future You.