It takes me a long time to write a book. I have excuses, but that is neither here nor there. I write about future technology that might be just around the corner. This means I run the risk of technology proving me right, turning my work into sci-fact rather than sci-fi. Or proving me wrong, which is just plain embarrassing.
I read William Gibson’s Neuromancer in 1986 and remember how cool I thought the opening line was. “The sky above the port was the colour of television, tuned to a dead channel.” I’m not sure a teenager nowadays would even know what that means. The book envisaged cyberspace, yet still assumed we’d have the static of a dead TV channel?
All fiction runs this risk, of course. It is shaped by the world around it and may age quite poorly, but in science fiction this stands out like giant Kodak moments and could make a reader give up altogether.
Maybe I should start writing witty romantic novels instead? After all, whether Elizabeth should get Mr Darcy or not in Jane Austen’s Pride and Prejudice might be up for debate, but it can’t be scientifically proven either way.
It occurs to me that a social media post runs the same risk, but for a different reason. We want to be first to comment on the next technological wonder or news article, to provide what we believe is our unique insight to the discourse. Verifying the truth of the information may be what stands between being first and being just one of the many who come later. No one remembers who came in second place, after all.
I think we are now well and truly in an age where it is more important to be first, even if you are wrong. So where does this lead? Yuval Noah Harari warns us in his book Nexus that we have a very naïve view of information. That we think the more information we have the better off we’ll be. The closer to the truth we are. But how can that be when most of the information generated is at best incomplete or at worst knowingly incorrect, aimed to deceive?
If we draw my favourite topic, AI, into this, what happens when all this junk information becomes input for AI to create even more information? Where will that recursive loop of ever junkier information take us? And is there even an exit criterion, a way out, once that loop begins?
I’m hoping, with very little to base it on, that AI could become the mechanism used to bring order out of this chaos. I wonder if, by doing so, I am just providing a bit more junk information to the planet-sized mountain we already have.