New technological tools often enable fresh scientific discoveries. Take the case of Antonie van Leeuwenhoek, the 17th-century Dutch amateur scientist and pioneer microscopist, who built at least 25 single-lens microscopes with which he studied fleas, weevils, red blood cells, bacteria and his own spermatozoa, among other things.
In hundreds of letters to the Royal Society and other scientific institutions, van Leeuwenhoek meticulously recorded his observations and discoveries, not always for a receptive readership. But he has since been recognised as the father of microbiology, having helped us understand and fight all manner of diseases.
Centuries later, new technological tools are enabling a global community of biologists and amateur scientists to explore the natural world of sound in richer detail and at greater scale than ever before. Just as microscopes helped humans observe things not visible to the naked eye, so ubiquitous microphones and machine learning models enable us to listen to sounds we cannot otherwise hear. We can eavesdrop on an astonishing soundscape of planetary “conversations” among bats, whales, honey bees, elephants, plants and coral reefs. “Sonics is the new optics,” Karen Bakker, a professor at the University of British Columbia, tells me.
Billions of dollars are pouring into so-called generative artificial intelligence, such as OpenAI’s ChatGPT, with scores of start-ups being launched to commercialise these foundation models. But in one sense, generative AI is something of a misnomer: these models are mostly used to rehash existing human knowledge in novel combinations rather than to generate anything genuinely new.
What may have a bigger scientific and societal impact is “additive AI”, using machine learning to explore specific, newly created data sets — derived, for example, from satellite imagery, genome sequencing, quantum sensing or bio-acoustic recordings — and extend the frontiers of human knowledge. When it comes to sonic data, Bakker even raises the tantalising possibility of interspecies communication within the next two decades, as humans use machines to translate and replicate animal sounds, creating a kind of Google Translate for the zoo. “We do not yet possess a dictionary of Sperm Whalish, but we now have the raw ingredients to create one,” Bakker writes in her book The Sounds of Life.
Read it all (registration or subscription).