
CONTEXT

Technology and societal shifts operate hand-in-hand to shape the world around us. This report is meant to highlight the innovation happening at the cutting edge across some of the most critical categories doing that shaping.

Contrary is a talent- and research-driven investment firm. We believe that at the root of every iconic company is one thing: extraordinary people. As extraordinary people build companies, they canvas for opportunities that lie at the cutting edge of either technological or societal shifts. Technology and society can drive both positive and negative outcomes. In this report, we explore both.

Cutting Edges of Technological Change: artificial intelligence, compute, energy, transportation, manufacturing, biotech, medicine, commodities.

Cutting Edges of Societal Change: digital engagement, global engagement, relationships, physical lives, entertainment, professional livelihood.

I. ARTIFICIAL INTELLIGENCE

Foundation Models

Large language models, based on the transformer architecture, are the first AI models to understand language, our medium for encoding all human knowledge. The size, performance, and wide applicability of these models have led researchers to begin naming them foundation models.

[Figure: Test scores of AI systems on various capabilities relative to human performance]

The transformer, a neural network architecture developed in 2017 that builds contextual awareness as it processes text, far outperformed all other architectures when it came to learning language.

[Figure: Artificial intelligence performance on knowledge tests vs. training computation]

And yet the "bitter lesson" of AI research, as articulated by Rich Sutton, is that regardless of architecture, "general methods that leverage computation are ultimately the most effective, and by a large margin."
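The contextual awareness the transformer builds comes from self-attention: every token's representation becomes a weighted mix of every other token's, with the weights learned from the data. A minimal single-head sketch in NumPy (the dimensions, weights, and function name are illustrative, not taken from any specific model):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    X: (seq_len, d_model) token embeddings. Each output row is a
    context-aware mixture of all value vectors, which is how the
    transformer builds contextual awareness across a sequence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over tokens
    return weights @ V                               # contextual representations

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextual vector per input token
```

Real models stack many such heads and layers, but the core operation, and the reason it parallelizes so well on modern hardware, is just these few matrix multiplications.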
[Figure: Number of parameters of notable machine learning models by sector, 2003-23]

As a result, models have grown massively in size, increasing by four orders of magnitude between 2018 and 2022. As models got larger, not only did their performance improve, but they began exhibiting emergent behaviors: abilities they were not taught explicitly.

Frontier Applications

[Figure: Protein folding prediction accuracy]

Large models are increasingly capable of understanding more than just language: the patterns underlying a variety of complex domains. In 2021, DeepMind's AlphaFold 2 was widely considered to have solved the protein folding problem.

Recently, new models have also pushed forward our understanding of chemistry. In 2023, DeepMind's GNoME tool produced 2.2 million crystal structures that didn't exist before, including 380K that are predicted to be stable and usable in future technologies.

Large language models are also allowing robots to build intuition around structuring and automating tasks, as demonstrated by Google's SayCan algorithm.

Scaling

[Figure: Projections of data usage for high-quality language data, number of words (log)]

However, there are a few major headwinds impeding the continued exponential scaling of large language foundation models. The first is the decreasing availability of high-quality language data.

The second is the growing cost of training larger models due to the increasing scale of compute required.

[Figure: Estimated energy consumption per request for various AI-powered systems]

Third is the increasing amount of energy required to run ever-larger models. Already, AI could be on track to consume as much electricity as all of Ireland, i.e. 29.3 terawatt-hours per year.

II. COMPUTE

Moore's Law

[Figure: Progress of miniaturization, comparing the sizes of semiconductor manufacturing process nodes with microscopic objects and visible light wavelengths]

The story of improving computation is a story of miniaturization.
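That miniaturization story is usually summarized as Moore's Law: transistor counts doubling roughly every two years. The compounding is easy to underestimate, so here is a minimal sketch (the 1971 starting count echoes the Intel 4004 era, but both it and the exact doubling period are illustrative assumptions, not figures from this report):

```python
# Moore's Law as a compounding process: transistor count doubles
# roughly every two years. Base year and count are illustrative.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Fifty years at one doubling every two years is 2**25,
# roughly a 33-million-fold increase in transistor count.
growth = transistors(2021) / transistors(1971)
print(f"{growth:.1e}")  # 3.4e+07
```

The same exponential, run in reverse, is why process nodes shrank from micrometers to single-digit nanometers, and why each further halving is now so hard-won.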
However, continued miniaturization is increasingly costly. As a result, the cost per transistor has stopped falling; in fact, it has begun to gradually increase. "Moore's Law is dead," said Jensen Huang in 2022, adding that "the idea that the chip is going to go down in price is a story of the past."

As miniaturization reached its limits, everything from clock speeds to energy efficiency and single-thread performance has stalled. Chip architects began adding more cores to improve CPU performance.

There are still immense efficiency benefits from keeping computation as physically close together as possible. This is why, despite the growing costs, fabs continue to research how to scale transistors down further, with the goal now at 1nm.

GPUs

For computation that can be processed in parallel, like the matrix multiplication required for AI training, GPUs have delivered a 1,000x increase in performance over single-threaded CPUs. Without tailwinds from miniaturization, the industry has looked elsewhere for improvements in performance, like improved architectures and larger processors. The rate of GPU performance improvement is now on par with Moore's Law.
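Matrix multiplication parallelizes so well because every element of the output can be computed independently of every other. A small sketch of that structure, written sequentially in NumPy but verified against the library's optimized routine (the function name is our own, for illustration):

```python
import numpy as np

# Each element C[i, j] = sum_k A[i, k] * B[k, j] depends only on row i
# of A and column j of B, so all m*n output elements are independent.
# This is exactly the structure a GPU exploits: on a GPU, every (i, j)
# pair below would be a separate thread running in parallel.
def matmul_elementwise(A, B):
    m, n = A.shape[0], B.shape[1]
    C = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            C[i, j] = np.dot(A[i, :], B[:, j])  # one independent work item
    return C

rng = np.random.default_rng(1)
A, B = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
C = matmul_elementwise(A, B)
assert np.allclose(C, A @ B)  # matches the optimized library result
```

Because AI training is dominated by exactly this kind of work, thousands of simple parallel cores beat a handful of fast sequential ones, which is the source of the GPU's advantage over single-threaded CPUs.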