BENGALURU: Eight years ago, I was a curious college kid obsessed with consumer tech. As I hopped from one technology news site to another, I stumbled upon an article about a debut ‘flagship’ smartphone from a major tech company that had previously focused entirely on software.
At first glance, it seemed unimpressive – an outdated-looking device that paled in comparison to the sleek, glass-and-metal designs with edge-to-edge screens dominating the market. Yet it had an ace up its sleeve. Despite its older-generation sensors and hardware on paper, its camera produced stunning, DSLR-esque photos.
Naturally, the tech world lost its collective mind. YouTubers raved about the device, as did ‘pro’ photographers who, until then, had turned up their noses at the idea of taking pictures with a smartphone.
But what really caught my attention in the article wasn’t the device’s impressive photography capabilities, but a quote from one of the engineers behind the phone, buried deep in the text almost as an afterthought. The engineer explained that to capture great photos with such a small sensor, they had to ‘go beyond physics’ and use software trickery to achieve results comparable to, or even better than, a purely hardware-based setup.
The engineer went on to say that he could envision a future where smartphone software would become so advanced that sensors wouldn’t be necessary at all. You could point your phone anywhere, at anything, and using data about your location, time, and countless other variables, the phone would produce an image entirely through software – without you actually taking a picture in the traditional sense!
At the time, I struggled to envision such a future and dismissed it as the kind of wild idea you might come up with after one too many energy drinks. But over the last two years, as neural network-powered diffusion models have advanced beyond anyone’s wildest imagination (except perhaps those oddballs in Silicon Valley), I started to think that the idea wasn’t as loony as I originally thought. Last month, I watched that same tech company unveil its latest phone at its annual conference, and I was finally convinced.
Sure, the company was supposed to be showing off all its shiny new hardware, but to nobody’s surprise, the spotlight was squarely on the software capabilities – all powered by generative artificial intelligence. Some features were genuinely impressive, others were kind of handy, and a few were just plain idiotic. But throughout it all, I couldn’t stop thinking about that article I had read years ago.
The new phone could not only take amazing pictures but also add the person taking the picture to a group photo after the fact (solving, once and for all, the problem of having to ask a random stranger). It could erase unsightly backgrounds (again, eliminating the need to beg random Photoshop experts on social media, only to receive rude pranks in return).
It could even insert completely ridiculous yet entirely believable backgrounds. Want to turn a picture of yourself sitting at home into one of you surfing the ocean? One tap, a few seconds, and voila. Unless you knew what to look for, you’d be hard-pressed to tell the original from the not-so-original.
And these features are just the tip of the iceberg. At this rate, with tech companies knowing more about us than we’d like to admit, capturing a photo without an actual camera sensor doesn’t just seem possible; it looks like it’s already a thing.
(The writer’s views are personal)