<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/home/static/styles/pretty-feed-v3.xsl" type="text/xsl"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0">
  <channel>
    <title>Interconnected</title>
    <link>https://interconnected.org/home</link>
    <description>A blog by Matt Webb. My notebook and space for thinking out loud since February 2000.</description>
    <copyright>Copyright © 2024 Matt Webb</copyright>
    <docs>http://www.rssboard.org/rss-specification</docs>
    <language>en</language>
    <lastBuildDate>Thu, 15 Feb 2024 23:30:33 +0000</lastBuildDate>
    <pubDate>Thu, 15 Feb 2024 18:34:00 +0000</pubDate>
    <item>
      <title>New app! A compass that points to the centre of the galaxy</title>
      <link>https://interconnected.org/home/2024/02/15/galactic-compass</link>
      <description><![CDATA[<div>
<p>Hey I made an app! It’s a green floating arrow that always points to the middle of the Milky Way.</p>
<p>i.e. 26,000 light years towards the supermassive central black hole, Sagittarius A*.</p>
<p>You can have it too!</p>
<p><strong><a href="https://apps.apple.com/gb/app/galactic-compass/id6451314440">Download Galactic Compass from the App Store.</a></strong></p>
<p>BUT: I don’t know how to write apps.</p>
<p>And yet here we are!</p>
<p>Let me explain.</p>
<p><img alt="" src="https://interconnected.org/home/static/content/2024/02/15/galactic-compass.jpg" /></p>
<h3>Cultivating a sense of the galactic centre</h3>
<p>It’s remarkably grounding?</p>
<p>Once upon a time I trained myself to always know where to look – and of course the centre of the galaxy moves over the day and the year: <em>"So I would end up pointing through the pavement, or down a street, and thinking, huh, that’s where it is."</em></p>
<p>It is a worthwhile super-sense:</p>
<blockquote>
<p>Eventually then I had this picture of myself, and the Earth, and the solar system, and the centre of the galaxy which had initially been whirling round me, and now it had flipped, <u>I was turning around it.</u></p>
<p>It was wildly situating.</p>
</blockquote>
<p>I’ve lost the intuition now, sadly.</p>
<p>The above description is from <a href="https://interconnected.org/home/2021/06/30/galaxy">my 2021 writeup</a> which I conclude by saying:</p>
<blockquote>
<p>In my imagination I see an iPhone app which displays a 3D model, connected to the gyroscope and the compass and the GPS. …</p>
<p><u>But there are slightly too many things I would need to learn</u></p>
</blockquote>
<p>So I couldn’t build it.</p>
<p>EXCEPT.</p>
<p><em>Now there is ChatGPT.</em></p>
<hr />
<h3>Developing an app with ChatGPT</h3>
<p>I can’t write Swift (the language used to code iOS apps).</p>
<p>But what I am able to do is break up large problems into smaller, expressible problems, and then sequence them.</p>
<p><strong>I’ll be detailed about this.</strong> When I’ve walked folks through this, they’re often interested, so it is (perhaps) non-obvious?</p>
<p><em>If you’re not interested in the detail, skip to the next section.</em></p>
<p>I started by installing Xcode and setting up a git repo. I know how to do that. (GitHub Copilot doesn’t work in Xcode by the way.)</p>
<p>To get going, I said to ChatGPT 4 something like:</p>
<ul>
<li>I’m building an iPhone app using SwiftUI. I have installed Xcode version X. Please walk me through creating a new iOS app with a single screen. The screen should be blank except for a line of text in the middle that says “Hello, World!”</li>
</ul>
<p>Then I followed the instructions.</p>
<p>There was lots of interaction like: <em>okay I’ve done step 1. I’m on step 2 but I can’t see the X, or I have the error Y, what should I do?</em></p>
<p>I know, from other coding, that I want to have my build working as early as possible.</p>
<p>My next question to ChatGPT was something like:</p>
<ul>
<li>Now I want to see my development app running on my phone as I work. Please walk me through that.</li>
</ul>
<p>Ok, now I’ve got a setup which means I can develop and I can test.</p>
<p>Now putting together the app itself is <em>not</em> about describing the overall app. I don’t want ChatGPT to be overfaced.</p>
<p>I worked in steps at this kind of resolution, making sure each step was complete before moving to the next:</p>
<ul>
<li>Okay now add two tabs at the bottom. The tabs are called Compass and Debug. Each has an icon. The first tab should show the Hello World screen, and the second tab should have the word “Debug” in the middle</li>
<li>We’ll work on the Debug screen. Add a section of text rows that simply say A, B, and C. Use standard iOS components. Ok, now add a label at the top. Make the text smaller. Make it capitalised.</li>
<li>Add two rows, latitude and longitude, based on the device location. Add the device heading.</li>
<li>Track the device motion and add rows for pitch, roll, and yaw.</li>
</ul>
<p>Then I found a Swift-compatible library to translate between galactic coordinates and relative coordinates. (Ultimately I need altitude and azimuth, a way of pointing at a position in the sky, based on the current time and location.) I’m using <a href="https://github.com/onekiloparsec/SwiftAA">SwiftAA</a>.</p>
<ul>
<li>I’m using SwiftAA. Please make a new Swift object that takes the current date and device location, and provides the azimuth and altitude of the galactic centre (I looked up the coordinates of the central black hole as a proxy)</li>
<li>Using the new GalacticCenter object, display azimuth and altitude in a new section on the debug screen.</li>
</ul>
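<p><em>(A sketch, not the app’s code – SwiftAA does this for real. Shown in Python with an approximate sidereal-time formula, it’s the standard equatorial-to-horizontal conversion applied to Sgr A*’s fixed coordinates.)</em></p>

```python
import math
from datetime import datetime, timezone

# Sgr A* (J2000): RA 17h 45m 40s, Dec -29deg 00' 28" -- a proxy for the galactic centre
RA_DEG = (17 + 45 / 60 + 40 / 3600) * 15.0
DEC_DEG = -(29 + 0 / 60 + 28 / 3600)

def alt_az(ha_deg, dec_deg, lat_deg):
    """Hour angle + declination + observer latitude -> (altitude, azimuth from north, eastward)."""
    ha, dec, lat = map(math.radians, (ha_deg, dec_deg, lat_deg))
    alt = math.asin(math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha))
    # Horizontal-frame components of the pointing direction
    north = math.cos(lat) * math.sin(dec) - math.sin(lat) * math.cos(dec) * math.cos(ha)
    east = -math.cos(dec) * math.sin(ha)
    return math.degrees(alt), math.degrees(math.atan2(east, north)) % 360.0

def galactic_centre_alt_az(when, lat_deg, lon_deg):
    """Where Sgr A* is in the sky at a UTC datetime and location (longitude east-positive)."""
    jd = when.timestamp() / 86400.0 + 2440587.5                  # Julian date from Unix time
    gmst_hours = (18.697374558 + 24.06570982441908 * (jd - 2451545.0)) % 24.0
    lst_deg = (gmst_hours * 15.0 + lon_deg) % 360.0              # local sidereal time
    return alt_az((lst_deg - RA_DEG) % 360.0, DEC_DEG, lat_deg)  # hour angle = LST - RA

alt, az = galactic_centre_alt_az(datetime(2024, 2, 15, 18, 34, tzinfo=timezone.utc), 51.5, -0.1)
```

The point being: the inputs are exactly what the Debug screen already shows – date, latitude, longitude – and the output is the altitude/azimuth pair the arrow needs.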
<p>I retained the Debug tab in the shipped app so you can see for yourself.</p>
<p>So that’s all the astronomical stuff done.</p>
<p>You never want to give ChatGPT big goals where it has to figure out the way on its own. Then both of you will be confused. Intermediate stepping stones and being sure of your boots with each stride, that’s the way.</p>
<p>Now we build the rotating arrow:</p>
<ul>
<li>Ok now we’re on the Compass screen. Make a SceneKit view with a cube in the middle over the whole screen</li>
<li><em>(There was a whole lot of back and forth here to fix scrolling issues, ensuring the tabs were tappable, positioning some text over the bottom, and so on.)</em></li>
<li>Now let’s make a green arrow from an extruded rectangle and squashed pyramid. The arrow should point to the top of the screen</li>
<li>Break out the data from the Debug screen into a separate object so both tabs can use it</li>
<li>Assuming the phone is lying flat, make the arrow point north</li>
<li>Rotate the arrow in 3D in real-time in response to the device orientation so that it always points north</li>
<li>Instead of pointing north, point the arrow at the altitude and azimuth of the galactic centre</li>
</ul>
<p>This now became pretty tricky because I had to learn about how to combine rotations. I barely know anything about quaternions, so there was a bunch to learn here.</p>
<p>ChatGPT, being a large language model but lacking embodiment, is awful at 3D maths and reference frames.</p>
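<p><em>(To show the shape of the problem – my own sketch, not the app’s SceneKit code: pointing an arrow at a given azimuth and altitude by composing two rotations as quaternions.)</em></p>

```python
import math

# Hypothetical SceneKit-like frame: x = east, y = up, arrow initially points north = (0, 0, -1)

def quat(axis, angle_rad):
    """Unit quaternion (w, x, y, z) for a right-handed rotation about axis."""
    s = math.sin(angle_rad / 2)
    return (math.cos(angle_rad / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Hamilton product: quat_mul(a, b) rotates by b first, then a."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rotate(q, v):
    """Rotate vector v by unit quaternion q via q * v * conj(q)."""
    w, x, y, z = q
    _, rx, ry, rz = quat_mul(quat_mul(q, (0.0, *v)), (w, -x, -y, -z))
    return (rx, ry, rz)

def point_at(azimuth_deg, altitude_deg):
    """Direction for an arrow that starts pointing north: tilt up, then swing round."""
    pitch = quat((1, 0, 0), math.radians(altitude_deg))    # tilt up to the altitude
    yaw = quat((0, 1, 0), -math.radians(azimuth_deg))      # swing east from north to the azimuth
    return rotate(quat_mul(yaw, pitch), (0.0, 0.0, -1.0))  # pitch first, THEN yaw
```

Quaternion multiplication isn’t commutative – swap the order of the yaw and pitch and the arrow ends up somewhere else entirely, which is exactly the kind of reference-frame confusion that makes combining rotations tricky.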
<p>Finally I…</p>
<ul>
<li>Asked ChatGPT to walk me through the process of building the app using Xcode Cloud and distributing it on TestFlight</li>
<li>Shared the test app with friends to ask for their help with rotations.</li>
</ul>
<p>Galactic Compass is still pretty janky, to be sure.</p>
<p>But it ain’t bad for a collaboration between someone who can’t build apps and an AI that is barely a year old.</p>
<hr />
<h3>“An app can be a home-cooked meal”</h3>
<p>Ethan Mollick and a team of social scientists studied a group of management consultants using AI.</p>
<p><a href="https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged">The headline is that, yes, AI results in better work.</a></p>
<p>The fascinating buried result is that the biggest effect is felt by the <em>bottom-half skilled participants.</em></p>
<p>i.e. if you’re sub-skilled then you can use AI to drag you up to median.</p>
<p>Now, none of us have just one skill. Like most people, I have a mix.</p>
<p>But I’m a reasonable engineer, an amateur designer, an ok systems thinker, ok at having ideas – and now a midwit <em>everything</em> when it comes to all the actual skilled tasks.</p>
<p>And the combination means I can bring ideas to life that simply wouldn’t be possible if I had to persuade a designer or engineer buddy to help me out. Being able to bring ideas to life means I can scaffold up to other ideas… and others…</p>
<p>Like this galactic compass.</p>
<p>Back in 2020, Robin Sloan said that <a href="https://www.robinsloan.com/notes/home-cooked-app/">an app can be a home-cooked meal</a>. It’s such a memorable perspective, and what we should aspire to from our software.</p>
<p>Now I’ve cooked a meal that anyone with an iPhone can download. Probably only a couple dozen people will want it, but I want it in my pocket, and I want to share it with my friends, and here we are.</p>
<p>And I can’t even cook!</p>
<p>But I know where the centre of the galaxy is, even so.</p>
<hr />
<p>Galactic Compass links:</p>
<p><a href="https://apps.apple.com/gb/app/galactic-compass/id6451314440">Download from the App Store.</a></p>
<p><a href="https://www.actsnotfacts.com/made/galactic-compass">Project page on Acts Not Facts.</a></p>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2024/02/15/galactic-compass</guid>
      <pubDate>Thu, 15 Feb 2024 18:34:00 +0000</pubDate>
    </item>
    <item>
      <title>The startling mundanity of robot cars</title>
      <link>https://interconnected.org/home/2024/02/07/cars</link>
      <description><![CDATA[<div>
<p>It’s pretty clear that autonomously driving robot cars will be everywhere sooner or later – the question is when.</p>
<p>I had my first ride in a robot cab! I’m in San Francisco this week. And then another experience yesterday, in a different way. Let me tell you about them.</p>
<hr />
<p><strong>Day 1.</strong></p>
<p>My friend and <a href="http://plasticbag.org/archives/2024/01/how-threads-will-integrate-with-the-fediverse/">social technology expert</a> Tom Coates took me in a Waymo robot cab up to Twin Peaks to see the city.</p>
<p>I mean - beautiful to an absurd degree. Downtown towers lit in the evening sun; a blimp floating over the Golden Gate Bridge; a rainbow joining the peak of the hill parks to the low bruised clouds.</p>
<p><a href="https://www.instagram.com/p/C3A8gRGptHQ/">My photos on Insta barely capture it.</a> What you’ll also see there is a video of the steering wheel of the robot car moving on its own, as it drove us down the hill.</p>
<p>It wasn’t a great first ride, on the way up.</p>
<p>We approached a school bus and blocked the road. The Waymo didn’t leave room. While we were waiting, the school bus driver waved at me to get out of the way. I was sitting in the front passenger seat. I gesticulated at the empty driver’s seat –</p>
<p><em>It’s empty!</em> (I tried to say by waving my arms.) <em>I’m in a haunted car! I can’t tell the ghost what to do!</em></p>
<p>It reversed eventually.</p>
<p>Only to get frustrated at a slow cyclist, avoid them by turning left down a street which turned out to be closed to traffic, and then panic: the Waymo came to a halt in the middle of the lane, turned on its hazards, and then… nothing.</p>
<p>What happened next was reassuring. Tom called customer service and someone took over, turned us round, and we set off again. There were a couple of other erratic moments.</p>
<p>The ride back down from Twin Peaks had no sweaty palm moments at all, so I got to observe.</p>
<p>The dash of the car has a screen with a graphic that shows what the car can see: it’s a cartoon of the road, coloured blocks being other vehicles, and dots being people.</p>
<p>The way Waymo spots people is not magical. It sees the same people that I do. (So if a person vanishes behind something, the dot disappears: no object permanence.)</p>
<p>Only it sees much quicker, and all at once. On one particular intersection the car was aware of maybe 20 people. I had to look around to double-check I had seen all the same people.</p>
<p>Now, my mental model of attention is a pyramid. The peak is focus; I can focus on one object at once. Below that are things I’m attending to, maybe a half dozen. Below that I have awareness. Objects move up and down the pyramid, deliberately and automatically.</p>
<p>The Waymo has a broad base of <em>awareness</em> that is bigger than mine. It has a greater attentional bandwidth than humans. This is unusual.</p>
<p>So, I had time to have these observations. It felt very quickly very <em>normal</em> to be in this robot car with the empty driver’s seat.</p>
<p><em>(btw I suspect robot cars will have empty driver’s seats mandated by legislation for a long time; it’s a great visible symbol of the “mind” of the car.)</em></p>
<hr />
<p><strong>Day 2.</strong></p>
<p>On the following day, a very different interaction with a robot car.</p>
<p>I was driving down the I-280 and had just settled into enjoying the sun, the scenery and the music. Cruising at 60mph, a (human-driven) car crossed directly in front of me, swerving across two lanes having almost missed its exit, and simultaneously slowing to about 20mph.</p>
<p>I hit the brakes hard and luckily there was some distance to the next car behind me: I didn’t hit the erratic driver, and nobody hit me. Phew.</p>
<p>Later I saw a Waymo in one of the middle lanes at 60mph.</p>
<p>I don’t know if it was in autonomous mode, maybe it had a driver taking it back to base.</p>
<p>Which car would I prefer to be driving near to on the interstate, the human or the robot?</p>
<p>Well.</p>
<p>I can mentally model the robot car way better than other human drivers. I can “contain” my theory of mind of the Waymo.</p>
<p>The Waymo is consistent, safe and dumb. Faced with almost missing its exit, the Waymo would re-route. It wouldn’t swerve.</p>
<p>It’s wild how quickly I went from: oh this will be a white knuckle ride. To: yeah ok, this fits right in, actually I’d prefer to drive near these.</p>
<p><em>(Now the empty driver’s seat is also the removal of somebody’s job. I’m not thinking about that particular social impact here.)</em></p>
<hr />
<p><strong>What I’ve learnt this week is: it’s clear that this works.</strong></p>
<p>At some point robot cars will be coming onto our roads as fast as they can manufacture them.</p>
<p>This works, whatever “this” is…</p>
<p>…because what “this” is is not clear. It’s not merely robot cars.</p>
<p>Like many things in Silicon Valley, the overall “this” is broad and obscure with scaffolding papered over with money and extraordinary effort. What is real?</p>
<ul>
<li>“This” could be robot cars PLUS many, many drivers in offshore call centres with Xbox controllers remotely piloting the car at any semi-tricky point. A wage arbitrage at best; economically unsustainable at worst.</li>
<li>“This” could be robot cars PLUS a required millimetre-precision 3D model of the entire operating environment with wildly detailed statistics about expected environmental behaviour.</li>
</ul>
<p>Whether or not our streets are full of robot cars in, say, 2027, with millions of the things rolling off the production lines depends on which “this” is the reality.</p>
<p>Looking at each…</p>
<hr />
<p><strong>Perhaps each robot car rests on the frequent, fractional effort of many, many remote human drivers?</strong></p>
<p>If there are many drivers, the economics won’t work.</p>
<p>So the assumption is that the artificial artificial intelligence can and will be replaced by actual AI. But can it? Is that moment 2 years away, 5 years away, 10 years away?</p>
<p>Is that moment dependent on collecting sufficient training data, or is it a matter of a fundamental breakthrough in the way AI works?</p>
<p>So, in that scenario, we’ll be waiting an unknown amount of time before rollout.</p>
<hr />
<p><strong>Perhaps the blocker is the ultra high resolution dynamic 3D model of the city?</strong></p>
<p>Every road, every freeway, every place it might expect to see a person: a map like this is expensive to produce.</p>
<p>So ultimately this requires a financial structuring innovation.</p>
<p>A map is not low-cost, high margin software. Think of it as a large capital investment that requires maintenance, like a developer building a new city quarter. Then it produces a yield as people pay for it over many years.</p>
<p>This is not a venture-shaped investment. It’s more like private equity, or infrastructure. Someone will need to produce the map and maintain it (like, half a billion dollars globally, say?) and then rent it out.</p>
<p>Then the yield will need to be repackaged and sold to pension funds who require the rock-steady reliability.</p>
<p>I understand that financial structuring is how EV charging networks rolled out in the Nordics in Europe. This is half-remembered, so apologies, but I believe the breakthrough was to take the company building the EV charging network and de-merge it into two entities.</p>
<p>One, an entity that rents the land and owns the physical infra of the charging points, and buys and sells the electricity. This is a private equity shaped, capital intensive, yield operation.</p>
<p>The second entity owns the customer relationship. It’s mainly software. It innovates on brand, reaches customers with electric vehicles, innovates on pricing and bundles. It’s venture-shaped.</p>
<p>Separated, these two business types can flourish.</p>
<p>In the case of building the 3D operating environment model, it’s like building a chip fab or any other long-lived high-cost asset. The blocker for PE investment is sustainable demand for the map.</p>
<p>So demand can’t just come from Waymo running a taxi operation. There need to be many companies offering differentiated use cases: yes cabs and ride sharing, but also commercial fleets; autonomous driving as a feature in individually owned vehicles; last-mile delivery operations; and so on, all paying per-use.</p>
<p>So that will take a period of time to develop too.</p>
<hr />
<p>Downstream questions: local legislation, and route to market and supply chain (like, will existing car companies license Waymo’s technology or assemble their own). I feel like the determining points are the ones above.</p>
<hr />
<p><strong>My takeaway.</strong></p>
<p>“This” works.</p>
<p>Robot cars work, feel normal, and are even preferable to be in and to be near! There will be very quick cultural acceptance and there’s already cultural readiness. It’s a matter of time. I didn’t take that as a given; that was a surprise to me.</p>
<p>I would <em>love</em> to know what the actual blocker to immediate global rollout is. Those were my guesses.</p>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2024/02/07/cars</guid>
      <pubDate>Wed, 07 Feb 2024 19:22:00 +0000</pubDate>
    </item>
    <item>
      <title>Poem/1: 48 hours on Kickstarter</title>
      <link>https://interconnected.org/home/2024/02/01/kickstarter</link>
      <description><![CDATA[<div>
<p><img alt="" src="https://interconnected.org/home/static/content/2024/02/01/poem1-shelf.jpg" /></p>
<p>Poem/1 is on Kickstarter! How has it been going?</p>
<p>I’m not going to go into the whole history here… I made my AI clock prototype – telling the time with a new poem every minute, every day, composed by ChatGPT. It made me laugh, I tweeted the pics, it went viral. So after many twists and turns, I pinned down a route to manufacture, and at 10am London time on Tuesday I launched it on Kickstarter to fund production.</p>
<p>I have been a <em>nervous wreck.</em></p>
<hr />
<p><strong><a href="https://www.kickstarter.com/projects/genmon/poem-1-the-ai-poetry-clock">The whole story is on the Kickstarter campaign page.</a></strong></p>
<p>You know what? I really do recommend you go read that page, even if you don’t plan on springing for a clock, because I’m super proud of it.</p>
<p>The lo-fi old-school 2012-vibes video at the top, the beautiful industrial design in collaboration with <a href="https://approach.studio">Approach Studio</a>, the storytelling, the tiny Easter eggs that one or two people have noticed, the whole kit and caboodle.</p>
<p>It has also been a joy to collab once again with <a href="https://tomarmitage.com">Tom Armitage</a>, on the firmware and so much more besides.</p>
<hr />
<p><strong>We’re exactly 48 hours in as I post this.</strong></p>
<p>Overall I need to raise £81,300 to hit escape velocity. The project is already at 60% funded with an amazing 455 backers and just shy of £50k pledged.</p>
<p>So I’m delighted. My upper-bound goal for the first day was 30% and it reached 48%. Wow. If you’ve backed Poem/1 then thank you so much!</p>
<p>28 days to go. It’s a marathon from here on.</p>
<p>I think about this in cricket terms, like a white ball game. I’ve had a great power play, now it’s into the middle overs and time to steadily build the innings – the game can be won or lost there. Then step up through the gears as we get to the closing stages.</p>
<p>(I think about <em>everything</em> in cricket terms.)</p>
<p>I posted <a href="https://www.kickstarter.com/projects/genmon/poem-1-the-ai-poetry-clock/posts/4017397">my first Kickstarter update</a> yesterday where I also posted a pic of the clock with a serendipitous rhyme:</p>
<p><em>"Persistence is the key, no doubt. / At 1:37 PM, push through and shout!"</em></p>
<p>I mean. How does it <em>know??</em></p>
<hr />
<p><a href="https://interconnected.org/home/2024/01/25/media">I updated the press tracking page.</a></p>
<p>A couple highlights:</p>
<ul>
<li><a href="https://www.fastcompany.com/91015583/this-whimsical-clock-is-the-playful-gadget-ai-needs-right-now">This whimsical clock is the playful gadget AI needs right now</a> <em>(Fast Company)</em> is the story behind Poem/1 and was published to coincide with the campaign launch. It’s a great piece, really well balanced and full of detail, and I’m so appreciative of FastCo’s support.</li>
<li><a href="https://jwp.news/013-matt-webb-poem-1/">013 Matt Webb - Poem/1</a> <em>(Journey With Purpose podcast).</em> Randy Plemel invited me to have a conversation about the design process, and it’s alternately serious and very silly. It was a <em>ton</em> of fun to record and you should totally listen and subscribe.</li>
</ul>
<hr />
<p>I have a ton to say! I am learning at a million mph. Another time for all that.</p>
<p>Anyway. Back the campaign if you like, and (of course!) don’t if you don’t like, we’ll still be friends.</p>
<p>Please do spread the word in your slacks and discords and whatsapps if you’re happy to. An easy way to get to the Kickstarter campaign is with this say-out-loud-able short URL: <a href="https://poem.town/ks">poem.town/ks</a>.</p>
<p>Ok let’s see what the clock says right now.</p>
<p><em>"Unlock your potential, ignite the divine / It’s nine fifty-nine, time to let your light shine."</em></p>
<p>I… I feel so <em>nourished.</em></p>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2024/02/01/kickstarter</guid>
      <pubDate>Thu, 01 Feb 2024 10:02:00 +0000</pubDate>
    </item>
    <item>
      <title>Thinking about the emerging landscape of AI hardware products</title>
      <link>https://interconnected.org/home/2024/01/26/hardware</link>
      <description><![CDATA[<div>
<p>I’ve been looking at the landscape of AI hardware products. Will the future be more like voice assistants that we talk to, or more like… well, something else?</p>
<p>See, there’s been a flurry of AI hardware in consumer product.</p>
<p><strong>Assistants.</strong></p>
<p>Two products aim to be your smartphone replacement:</p>
<ul>
<li><a href="https://www.fastcompany.com/91013196/how-design-drove-10m-in-pre-orders-for-rabbit-r1-ai-hardware">Rabbit r1</a> <em>(Fast Company)</em> – a bright orange handheld with screen, a rotating camera, a big walkie talkie button, and a <s>lovely fidget device</s> scroll wheel. Rabbit OS interacts with apps on your behalf when you talk to it.</li>
<li><a href="https://www.theverge.com/2023/11/9/23953901/humane-ai-pin-launch-date-price-openai">Humane AI Pin</a> <em>(The Verge)</em> – a wearable microphone plus world camera worn as a badge. It does whatever you ask re: what you’re looking at, and outputs results with a super futuristic green laser projector (that somehow also looks retro).</li>
</ul>
<p>If iPad was dismissed as a “consumption device” versus general purpose computing devices, these are both “service devices”. They’re made for ordering cabs, booking restaurants, and automating sequenceable knowledge work tasks.</p>
<p><em>(btw I am not super into the Humane AI Pin overall but I do wish my phone had a green laser projector that I could play with. So, so good.)</em></p>
<p>Two other wearables:</p>
<ul>
<li><a href="https://www.theverge.com/23922425/ray-ban-meta-smart-glasses-review">Ray-Ban Meta smart glasses</a> <em>(The Verge).</em> This product is primarily for photos and streaming, but the new AI features let you ask questions about what you’re seeing: how do I get to X, how many calories in my lunch, where do I buy that bag.</li>
<li><a href="https://www.fastcompany.com/91007630/avi-schiffmanns-tab-ai-necklace-has-raised-1-9-million-to-replace-god">Tab AI</a> <em>(Fast Company)</em> is a necklace with a mic that you talk to. A chatbot: <em>"What I’m trying to do is create a new relationship in your life; radical transparency without concern of judgment. I think this is a relationship people used to have with God but is lacking in the modern world."</em></li>
</ul>
<p>I am into the ambition and experimentation here!</p>
<p>And, no, <em>“AI hardware”</em> is not a product category, in the same way that voice assistants like Amazon Echo aren’t really a category. You don’t buy them to be a voice assistant, you buy them to be a kitchen timer or to play music or whatever. A “smart speaker” is a speaker.</p>
<p>Yet these are all assistants in one way or another. Playing with the form factor or the way it fits into your life.</p>
<p>So that’s potentially one end of the AI hardware spectrum.</p>
<hr />
<p><strong>Non-assistants.</strong></p>
<p>Then there is AI hardware <em>without</em> any kind of assistant.</p>
<p>Where the AI enables some other feature. The AI isn’t on the surface as the user interface, it’s deep inside, embedded.</p>
<p>Ok, back to 2018:</p>
<ul>
<li><a href="https://www.theverge.com/2018/2/27/17055618/google-clips-smart-camera-review">Google Clips</a> smart camera. Not point-and-click <em>(that was a camera category once upon a time)</em> but always-on. You stand this small square camera on a shelf at a party and it takes 15 photos per second… selecting and retaining only the good ones, thanks to its on-device real-time AI. Like a domesticated GoPro plus a robot photo editor, all in one.</li>
</ul>
<p>Clips didn’t do so well – it’s almost impossible to invent new categories.</p>
<p>But I’m using it to illustrate this <em>embedded AI</em> end of the spectrum. (And the fact that Google did it on-device 6 years ago shows how long they’ve been ahead with AI, even if that’s not quite so apparent today.)</p>
<hr />
<p><strong>A typology of AI hardware features.</strong></p>
<p>To tease this space out a little further, <em>assistants</em> bundle together two separate AI-enabled features: new user interfaces, and new agentive (tool using) abilities.</p>
<p>So I think we have a triangle (<a href="https://interconnected.org/home/2024/01/05/triangles">ternary diagrams have been on my mind</a>).</p>
<ul>
<li>AI-enabled user interfaces, like voice or computer vision</li>
<li>Behind-the-scenes agentive AI, like figuring out sequencing from your instructions, or using apps as tools</li>
<li>Embedded AI to enable a feature, like “interesting scene” detection in a camera</li>
</ul>
<p>You could draw a triangular landscape between these extremes. All the products I’ve mentioned could be plotted somewhere inside.</p>
<p><em>Exercise for the reader: find the gaps and invent new products like planting flowers…</em></p>
<hr />
<p><strong>Embedded AI.</strong></p>
<p>Me, I’m most interested when AI <em>isn’t</em> an assistant.</p>
<p>The argument goes like this…</p>
<p>Moore’s Law cuts both ways:</p>
<blockquote>
<p>If computers get 100 times more powerful over a decade, we can EQUIVALENTLY say that computers get: 100 times smaller; or 100 times cheaper; or 100 times more abundant.</p>
</blockquote>
<p>This is what I’ve previously called <a href="https://interconnected.org/home/2023/10/06/ubigpt">intelligence too cheap to meter</a> – and what does it mean to have GPT-4-level intelligence in any light switch, or behind every menu command in your notes app, or your cat’s collar, or in your shoes, or quietly doing its job as a <a href="https://interconnected.org/home/2023/02/07/braggoscope">software universal coupling</a> or whatever?</p>
<p>Ubiquitous, embedded AI.</p>
<p>I called it <a href="https://berglondon.com/talks/botworld/?slide=30">fractional artificial intelligence</a> back in 2012:</p>
<blockquote>
<p>We can be frivolous with mathematics, throw it around like confetti.</p>
</blockquote>
<p>So I didn’t mean “fractional” as in dumb; only dumb compared to the giant planet brains owned by Big AI. I meant… small and everywhere. </p>
<p>I had no idea in 2012 what the implications of intelligence too cheap to meter would be, and I have no idea <em>now.</em></p>
<p>But I’m interested!</p>
<hr />
<p><strong>Back to the poetry clock, of course.</strong></p>
<p>“Embedded AI” is the territory that I’m playing in with my rhyming clock.</p>
<p><em><strong>Obligatory plug:</strong> the Kickstarter pre-launch page has just opened! <a href="https://www.kickstarter.com/projects/genmon/poem-1-the-ai-poetry-clock">Go register your interest in Poem/1!</a> Telling the time with a new poem every minute, composed by ChatGPT, and a gorgeous e-paper screen! You’ll get a notification as soon as the campaign opens next week.</em></p>
<p>The AI clock isn’t an assistant; it doesn’t have agentive capabilities to use tools and do general purpose problem solving. It doesn’t respond to your presence or requests or really any context at all except the time.</p>
<p>It’s an appliance.</p>
<p><em>An AI-ppliance.</em> (Sorry.)</p>
<p>For all of it being “simply an appliance,” it’s weird to be in the same room and hang out, let me tell you.</p>
<p>We are not accustomed to things like rhyming couplets emerging from a machine poet. Poems are not used <em>decoratively,</em> except when made in cross-stitch and hung on the wall. And yet! Here we are!</p>
<p>I think, with the poetry clock, it’s ambiguous whether there’s AI involved at all. A human could quite possibly write a whole day of poems, one for every minute, and then display them on a loop. It’s only the sheer infinity of it that gives it away, and you only really appreciate <em>that,</em> deep down, after living with it.</p>
<p>It’s sort of human (but the words aren’t as good as a human poet would write), sort of alien (it has inhuman endurance).</p>
<p>I think there will be a lot of this.</p>
<p>Insane AI, planetary compute, used for really, really mundane things.</p>
<hr />
<p><strong>Sharing our planet with machine entities.</strong></p>
<p>There’s a great interview with Stanley Kubrick about the movie <em>2001: A Space Odyssey</em> (<a href="https://interconnected.org/home/2014/11/12/filtered">previously discussed in 2014</a>).</p>
<blockquote cite="http://www.visual-memory.co.uk/amk/doc/0069.html" class="quoteback" data-author="Joseph Gelmis" data-title="An Interview with Stanley Kubrick (1969)">
<p>One of the things we were trying to convey in this part of the film is <u>the reality of a world populated - as ours soon will be - by machine entities who have as much, or more, intelligence as human beings</u>, and who have the same emotional potentialities in their personalities as human beings.</p>
<footer>– Joseph Gelmis, <cite><a href="http://www.visual-memory.co.uk/amk/doc/0069.html">An Interview with Stanley Kubrick (1969)</a></cite></footer>
</blockquote>
<p>And:</p>
<blockquote>
<p>We wanted to stimulate people to think what it would be like to share a planet with such creatures.</p>
</blockquote>
<p>YES!</p>
<p>BUT!</p>
<p>I wonder whether the reality of a world populated with AI is not so much about listening, watching, speaking, laser-projecting entities, assistants in our pockets and hanging on necklaces and our every word - not JARVIS or HAL 9000 or Samantha or Joshua - but instead a trillion extremely mundane, genius-level, nameless embedded intelligences, squirrelling away, hidden inside everything?</p>
<p>And how will that work, practically? How will that technology be developed, managed, maintained, secured, networked, owned, shared and made equitable?</p>
<p>And how will it feel to live there?</p>

	<hr />
	<p><small>More posts tagged:
	
	<a href="https://interconnected.org/home/tagged/gpt-3">gpt-3</a>
	(25), 
	
	<a href="https://interconnected.org/home/tagged/that-ai-clock-and-so-on">that-ai-clock-and-so-on</a>
	(7).
	
	</small></p>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2024/01/26/hardware</guid>
      <pubDate>Fri, 26 Jan 2024 18:16:00 +0000</pubDate>
    </item>
    <item>
      <title>Press for Poem/1</title>
      <link>https://interconnected.org/home/2024/01/25/media</link>
      <description><![CDATA[<div>
<p>I’m using this post to track press and media for <strong>Poem/1,</strong> my AI rhyming clock.</p>
<p>Although there hasn’t been any! Only for the prototype. So I’m listing all of last year’s press, and then I’ll update this post in the future if required. (When required! When required!)</p>
<p>In case you missed my latest shilling, I’m manufacturing this ridiculous clock, Kickstarter gods be willing:</p>
<ul>
<li>Check out the <a href="https://aiclock.substack.com/p/update-6-industrial-design-first">industrial design first look</a>. I shared this with 1.5k newsletter subscribers just yesterday.</li>
<li>The Kickstarter pre-launch page is now open! <a href="https://www.kickstarter.com/projects/genmon/poem-1-the-ai-poetry-clock/">Do register your interest in the upcoming campaign over there</a> – <em>218 followers right now.</em></li>
</ul>
<p>That industrial design update also includes my convoluted theories on <em>AI green…</em> <em>"Have you noticed that the canonical colour of AI is green?"</em> i.e. the USB-C cable in the box is green. Folks this is what we call Design.</p>
<h3>Media for the prototype</h3>
<p>Oldest first.</p>
<p><strong><a href="https://www.theverge.com/2023/3/17/23644625/this-clock-uses-chatgpt-to-rhyme-and-also-help-relay-the-time">This clock uses ChatGPT to rhyme / and also help relay the time</a></strong>
<br /><em>The Verge</em> (17 Mar 2023)</p>
<p>Short piece, same day as my <a href="https://x.com/genmon/status/1636698753007603713?s=20">original tweet</a>.</p>
<blockquote>
<p>It’s the creation of Matt Webb, who shared it on Twitter. We love it.</p>
</blockquote>
<p><strong><a href="https://www.theverge.com/23669343/ai-clock-chatgpt-poems-rhymes-diy-project">This AI clock uses ChatGPT to generate tiny poems that tell the time</a></strong>
<br /><em>The Verge</em> (4 Apr 2023)</p>
<p>Long feature with interview.</p>
<blockquote>
<p>It uses ChatGPT to create a short two-line rhyme that also tells the time for every minute of the day. It’s incredible and we want one.</p>
</blockquote>
<p>Also a turn of phrase from me:</p>
<blockquote>
<p>“Clockwork means you get precision drift; AI-work means you get hallucination drift.”</p>
</blockquote>
<p><strong><a href="https://www.nytimes.com/interactive/2023/04/14/upshot/up-ai-uses.html">35 Ways Real People Are Using A.I. Right Now</a></strong>
<br /><em>The New York Times</em> (13 Apr 2023)</p>
<p>Prominent mention in longer article.</p>
<blockquote>
<p>11. Build a clock that gives you a new poem every minute.</p>
<p>“Yes, programmatic A.I. is useful,” he said. “But more than that, it’s enormous fun.”</p>
</blockquote>
<p><strong><a href="https://www.hindustantimes.com/trending/man-claims-his-ai-clock-generates-a-new-poem-every-minute-using-chatgpt-101681296561038.html">Man claims his AI clock generates a new poem every minute using ChatGPT</a></strong>
<br /><em>Hindustan Times</em> (12 Apr 2023)</p>
<p>Delightfully sceptical.</p>
<blockquote>
<p>The man took to Twitter to share a post claiming that his AI-powered clock generates a poem every minute using ChatGPT.</p>
<p>A man’s post about creating an AI-based clock that uses ChatGPT to generate poems has gone viral. …</p>
</blockquote>
<p><strong><a href="https://www.ndtv.com/offbeat/man-develops-ai-clock-that-generates-a-new-poem-every-minute-using-chatgpt-3950670">Man Develops AI Clock That Generates A New Poem Every Minute Using ChatGPT</a></strong>
<br /><em>NDTV</em> (15 Apr 2023)</p>
<p>Cites <em>The Verge.</em></p>
<blockquote>
<p>Now, a man has created an AI clock that uses ChatGPT to create tiny poems to tell time.</p>
</blockquote>
<p><strong>Funny Old World</strong>
<br /><em>Private Eye</em> (no. 1597, 5 May 2023)</p>
<p>Print only. Reproduces the NDTV story as sent in by a reader. (<a href="https://www.instagram.com/p/Cr0giGotjJQ/">I posted it on Insta</a>, appearing in the <em>Eye</em> is a career high.)</p>
<blockquote>
<p>SPOTTED a bizarre but true news story from your corner of the globe? … £20 paid for all entries used.</p>
</blockquote>
<p><strong><a href="https://www.youtube.com/watch?v=p9Q5a1Vn-Hk">Inside OpenAI, the Architect of ChatGPT, featuring Mira Murati</a></strong>
<br /><em>Bloomberg Originals</em> (16 Jun 2023)</p>
<p>Appears in video interview with Mira Murati, OpenAI CTO, in <em>The Circuit with Emily Chang</em> on YouTube (3m05s).</p>
<p>My clock tweet is the first illustration for this first question:</p>
<blockquote>
<p>Chang: Did that surprise you? I mean, what was your reaction to the world’s reaction?</p>
<p>Murati: We were surprised by how much it captured the imaginations of the general public and how much people just loved spending time talking to this AI system and interacting with it.</p>
</blockquote>
<p>Thanks all!</p>
<hr />
<h3>Media for Poem/1</h3>
<p><s>None. Let’s be hopeful. None <em>yet!</em></s></p>
<p><strong><a href="https://www.fastcompany.com/91015583/this-whimsical-clock-is-the-playful-gadget-ai-needs-right-now">This whimsical clock is the playful gadget AI needs right now</a></strong>
<br /><em>Fast Company</em> (30 Jan 2024)</p>
<p>Great piece by Mark Wilson, long-time critical friend of the design and technology world (a vital role), who has been watching the emerging AI hardware landscape more closely than anyone else I follow. It covers the story and the design decisions behind Poem/1.</p>
<blockquote>
<p>The Poem/1 clock dreams up a new poem every minute to tell you the time. Do you need it? No. But you might want it.</p>
</blockquote>
<p><strong><a href="https://jwp.news/013-matt-webb-poem-1/">013 Matt Webb - Poem/1</a></strong>
<br /><em>Journey With Purpose podcast</em> (30 Jan 2024)</p>
<p>I had a TON of fun in this pretty irreverent conversation with Randy Plemel… which also gets into some serious points about design process.</p>
<blockquote>
<p>I’m making this gag clock. Which talks in ridiculous poems that sounds like a tiny, tiny Sam Altman telling me to go for it. And I’m using planetary compute to do it. And I love the absurdity.</p>
</blockquote>
<p><strong><a href="https://arstechnica.com/information-technology/2024/01/rhyming-ai-powered-clock-sometimes-lies-about-the-time-makes-up-words/">Rhyming AI-powered clock sometimes lies about the time, makes up words</a></strong>
<br /><em>Ars Technica</em> (30 Jan 2024)</p>
<p>Zooms in on the charming (but risky) aspect of a clock that may hallucinate the time. (This is rare now, as I explain in the <a href="https://www.kickstarter.com/projects/genmon/poem-1-the-ai-poetry-clock/faqs">Kickstarter campaign FAQ</a>.)</p>
<blockquote>
<p>Poem/1 Kickstarter seeks $103K for fun ChatGPT-fed clock that may hallucinate the time.</p>
</blockquote>
<p><strong><a href="https://www.forbes.com/sites/lesliekatz/2024/02/02/its-noon-you-loon-ai-powered-clock-tells-time-with-poems-written-by-chatgpt/">It’s Noon You Loon: AI-Powered Clock Tells Time With Poems Written By ChatGPT</a></strong>
<br /><em>Forbes</em> (2 Feb 2024)</p>
<p>Pleasant, informative piece based on an interview so it has some extra detail.</p>
<blockquote>
<p>“We have a machine-poet velocity of 0.5 million poems/year,” Webb joked over email. “There’s a new unit of measurement for you.”</p>
</blockquote>
<p><strong><a href="https://medium.com/@fosta/overpromising-and-stumbling-bambis-c4139eb43291">Overpromising and Stumbling Bambis</a></strong>
<br /><em>Nick Foster</em> (14 Feb 2024)</p>
<p>Foster is former Head of Design at Google X. He says that tech companies <em>"position their products not only as new ideas but as culturally important moments, ruptures in the status quo or accelerations of our species."</em></p>
<p>Instead Poem/1 is</p>
<blockquote>
<p>a bit of new thinking escaping in the form of a product.</p>
</blockquote>
<p>Some more:</p>
<ul>
<li><a href="https://decrypt.co/214865/ambient-computing-ai-clock-generates-a-new-poem-every-minute">Tick Tock: Get A New Poem Every Minute From This AI Clock</a>, <em>Decrypt</em> (30 Jan 2024)</li>
<li><a href="https://tldr.tech/ai">TLDR AI newsletter</a> (31 Jan 2024), linked to the FastCo article</li>
<li><a href="https://www.kickstarter.com/newsletters/invent">Kickstarter Invent</a> (31 Jan 2024), top feature in the design and tech newsletter</li>
<li><a href="https://futurism.com/the-byte/ai-clock-sometimes-hallucinates">AI-powered clock sometimes hallucinates the wrong time</a> (4 Feb 2024), also syndicated to Yahoo News</li>
<li><a href="https://www.ben-evans.com/newsletter/">Benedict Evans’ newsletter</a> (6 Feb 2024), “Outside interests” in edition no. 526</li>
</ul>
<hr />
<p>I am so grateful for any coverage. Especially when the Kickstarter campaign launches next week, exact date TBA. It’ll be a marathon I’m sure. I am available for podcasts, opportunist soundbites, breakfast TV, internal talks and marriages and garden parties.</p>

	<hr />
	<p><small>More posts tagged:
	
	<a href="https://interconnected.org/home/tagged/that-ai-clock-and-so-on">that-ai-clock-and-so-on</a>
	(7).
	
	</small></p>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2024/01/25/media</guid>
      <pubDate>Thu, 25 Jan 2024 15:45:00 +0000</pubDate>
    </item>
    <item>
      <title>Ok it’s happening, my AI clock is happening</title>
      <link>https://interconnected.org/home/2024/01/23/clock</link>
      <description><![CDATA[<div>
<p>Hey so last year I made an AI clock for my bookshelves. It tells the time with a new poem every minute composed by ChatGPT.</p>
<p>Yeah so my clock ended up featured in the New York Times. And The Verge. <a href="https://nitter.net/genmon/status/1636698753007603713">The tweet got to almost 6k likes</a> with 845k views. Tons of photos there.</p>
<p>WELL.</p>
<p>The prototype clock lives in my kitchen. It tells me the time; I keep an eye on it. Here’s a poem from the other evening. Just a coincidence. (I hope?)</p>
<blockquote>
<p>In the kitchen, knives at hand / Eight o’clock eight, a gourmet night planned.</p>
</blockquote>
<p>The screen doesn’t glow. It has a handsome e-paper screen with crisp type.</p>
<p>Mostly it’s simply… poetic. This is what I got on my way out of the house today:</p>
<blockquote>
<p>In shadows deep, before the light / 7:19 haunts, the mornings first sight.</p>
</blockquote>
<p>It is sometimes profound! And sometimes really dumb! Then sometimes weird stuff comes up and I take a pic.</p>
<p>Like, where did this even come from?</p>
<blockquote>
<p>In the desert, where the dunes are vast, / Six forty-two, the sands hold memories of the past.</p>
</blockquote>
<p><em>Now.</em></p>
<p>I am someone who will take a gag far far too far and far far FAR too seriously.</p>
<p><strong>So I’m manufacturing this thing in China.</strong> For real life.</p>
<p>The Kickstarter is IMMINENT. <em>Like, next week.</em></p>
<p>My AI clock is now called <strong>Poem/1.</strong></p>
<p>But nobody knows what it looks like.</p>
<p>Yet.</p>
<p>I’ve been working with the London industrial design studio <a href="https://approach.studio">Approach</a>. Their client list includes Nothing, Google and Logitech.</p>
<p>I’m going to show you what we’ve come up with, what I’m going to market with, for the first time…</p>
<p>…tomorrow, Weds 24th.</p>
<p>On the mailing list.</p>
<p><em>(I understand this is what is called a “drop.”)</em></p>
<p>Gorgeous renders, clever industrial design details, beautiful e-paper screen and all.</p>
<p>So!</p>
<p><strong>If you want to be the first to see what Poem/1 looks like, then <a href="https://aiclock.substack.com">subscribe here</a>.</strong></p>
<p>p.s. here’s a <a href="https://www.instagram.com/p/C2b8vJStXhD/?utm_source=ig_web_copy_link">sunrise to sunset time lapse</a> of my un-designed prototype. Poems flickering by.</p>
<hr />
<p><em><strong>UPDATE 24 Jan:</strong></em></p>
<p><a href="https://aiclock.substack.com/p/update-6-industrial-design-first">Industrial design first look is here.</a> You’re going to love it.</p>

	<hr />
	<p><small>More posts tagged:
	
	<a href="https://interconnected.org/home/tagged/that-ai-clock-and-so-on">that-ai-clock-and-so-on</a>
	(7).
	
	</small></p>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2024/01/23/clock</guid>
      <pubDate>Tue, 23 Jan 2024 08:46:00 +0000</pubDate>
    </item>
    <item>
      <title>Acts Not Facts #8: clock news, client news, AI, AI, AI, and plans</title>
      <link>https://interconnected.org/home/2024/01/19/anf</link>
      <description><![CDATA[<div>
<p>Happy new year!</p>
<p>Some years I see how long I can feasibly say Happy New Year to people. My record is March. But this year 2023 already feels like months and months ago.</p>
<p>It’s the first Acts Not Facts update of the year! Weekly is too frequent. But I’ll write notes periodically and there’s a lot to cover today.</p>
<h3>Countdown to Kickstarter</h3>
<p>My AI clock is now named <strong>Poem/1</strong> and - BIG NEWS - Kickstarter gave the green light to the campaign yesterday. So now I’m getting the last few things lined up before hitting that <em>Launch</em> button.</p>
<p><a href="https://aiclock.substack.com">More news over at the AI clock newsletter.</a> tl;dr,</p>
<ul>
<li>I’ll do a reveal on the industrial design next week</li>
<li>The Kickstarter campaign will launch probably the week after…</li>
<li>…but I want to align it with some press. <strong>So if you’re a journalist please get in touch.</strong></li>
</ul>
<p>Oh I missed this media first time around: Bloomberg interviewed Mira Murati, OpenAI’s CTO, and used my clock as the first example. Watch <a href="https://www.youtube.com/watch?v=p9Q5a1Vn-Hk">Inside OpenAI by Bloomberg Originals</a> <em>(YouTube, at 3m7s).</em></p>
<h3>GOV.UK is working with AI to improve the interactions people have with government</h3>
<p>That client I haven’t been able to name? It’s GOV.UK, the part of UK gov that looks after digital information and services.</p>
<p>They started experimenting with AI really early, and built and tested a chat UI for the 700,000 pages of information that they look after. <a href="https://insidegovuk.blog.gov.uk/2024/01/18/the-findings-of-our-first-generative-ai-experiment-gov-uk-chat/">The GOV.UK Chat research findings have now been published.</a> It’s been amazing to watch. There are some unique challenges.</p>
<p>Personally I’ve been helping out with <em>“what next”…</em> how should GOV.UK systematically explore AI to build capability and open the imagination, and what is the strategic “why” here? Well, eventually to help transform how people interact with government, sure, but there are stepping stones to be chosen.</p>
<p><a href="https://insidegovuk.blog.gov.uk/2024/01/18/experimenting-with-how-generative-ai-could-help-gov-uk-users/">The new AI Team is announced here</a> by Chris Bellamy, Director of GOV.UK. I’ve been bringing a perspective of design pathfinding, one that I <a href="https://www.actsnotfacts.com/made/large-language-models">first talked about with the BMJ back in May</a> and then <a href="https://interconnected.org/home/2023/12/08/ai-pathfinding">wrote up here in more detail</a> <em>(Dec 2023).</em></p>
<p>Plus some heavy advocacy for thinking through making, alongside the research…</p>
<p>More to say about all of that another time I’m sure. It’s a privilege working with this smart and motivated team.</p>
<h3>Building at PartyKit</h3>
<p>My mainline client continues to be PartyKit, where I invent in order to stretch and explore their new platform for the realtime, multiplayer internet.</p>
<p>Just before the holidays PartyKit shipped AI integrations, and I wrote a long piece on the blog:</p>
<blockquote cite="https://blog.partykit.io/posts/using-vectorize-to-build-search/" class="quoteback" data-author="PartyKit blog" data-title="Using Vectorize to build an unreasonably good search engine in 160 lines of code">
<p>The tl;dr is that search got really good suddenly and really easy to build because of AI.</p>
<p>For instance, this is the search experience I recently made for my side project website Braggoscope.</p>
<footer>– PartyKit blog, <cite><a href="https://blog.partykit.io/posts/using-vectorize-to-build-search/">Using Vectorize to build an unreasonably good search engine in 160 lines of code</a></cite></footer>
</blockquote>
<p>It’s a straightforward, show-the-code account of one of the fundamental techniques in building with AI. One reader review: <em>"Was reading the Vector DB blog and honestly I think one of the most approachable blogs I’ve seen on the topic + demo"</em> – so I’m pleased with that.</p>
<p>I really enjoyed writing it.</p>
<p>What I find hardest to communicate to people who work with technology, before they use AI, is how much they need to reset their assumptions about how hard things are. e.g. a great search engine is so <em>easy</em> now.</p>
<p>The best way to demystify is to go line-by-line. Code isn’t scary.</p>
<p>And there’s no magic here. An embedding model is just a function call, a vector database is just a function call, broadcasting messages to a multiplayer room is a function call, keeping multiplayer state is a function call. All realtime, all scalable, there’s nothing to it.</p>
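<p>To make that “just a function call” claim concrete, here’s a minimal TypeScript sketch of the embedding-search idea. The vectors are hardcoded toy stand-ins for what an embedding model and a vector database (Vectorize, in the post) would give you, and the document names are invented for illustration; the shape of the technique is the same: embed everything, then rank by similarity.</p>

```typescript
// Toy embedding search: in the real post, vectors come from an AI embedding
// model and live in a vector database; here they are hardcoded stand-ins
// so the idea runs anywhere.

type Doc = { id: string; vector: number[] };

// Cosine similarity: how closely two vectors point in the same direction.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// "Search" is just: embed the query, then rank documents by similarity.
function search(query: number[], docs: Doc[], topK = 2): string[] {
  return docs
    .map((d) => ({ id: d.id, score: cosine(query, d.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK)
    .map((d) => d.id);
}

// Pretend these three vectors came from an embedding model.
const docs: Doc[] = [
  { id: "in-our-time-episode", vector: [0.9, 0.1, 0.0] },
  { id: "ai-clock-post", vector: [0.1, 0.9, 0.2] },
  { id: "vision-pro-post", vector: [0.0, 0.2, 0.9] },
];

// The "embedding" of the user's query; its nearest neighbour wins.
console.log(search([0.2, 0.8, 0.1], docs, 1));
```

<p>Swap the toy vectors for real embedding calls and the array scan for a vector database query, and that’s the skeleton of the search engine the post walks through.</p>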
<hr />
<h3>Acts Not Facts in 2024</h3>
<p>I haven’t sat down and made a year plan for Acts Not Facts, this oh-so-nascent product invention femto-studio of mine. Here’s my off the cuff <em>prompt completion</em> on the matter…</p>
<p>I would say that I’m roughly where I wanted to be, a year in. <a href="https://www.actsnotfacts.com">As the big Venn on the ANF website says</a>, I’m focused on AI, group experiences, and embodiment. At the end of 2023 I’ve built up a decent portfolio that demonstrates precisely that. Good!</p>
<p>Which means the next step is to pick up a team project. Ideally something that involves invention, AI, interactions, and hardware where I get to hire a tiny dream team to deliver.</p>
<p>Lmk if there’s a project we should talk about.</p>
<p>p.s. I have that SF/Bay Area trip <a href="https://interconnected.org/home/2024/01/11/travel">coming up w/c 5 Feb</a>. My schedule’s filling up. I’d love to squeeze in a couple more chats.</p>

	<hr />
	<p><small>More posts tagged:
	
	<a href="https://interconnected.org/home/tagged/that-ai-clock-and-so-on">that-ai-clock-and-so-on</a>
	(7), 
	
	<a href="https://interconnected.org/home/tagged/weeknotes">weeknotes</a>
	(8).
	
	</small></p>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2024/01/19/anf</guid>
      <pubDate>Fri, 19 Jan 2024 16:20:00 +0000</pubDate>
    </item>
    <item>
      <title>What is the fart app for Apple Vision Pro?</title>
      <link>https://interconnected.org/home/2024/01/16/fart</link>
      <description><![CDATA[<div>
<p>What I mean is: what’s the app that you download, makes you laugh, you show your friends, it makes <em>them</em> laugh, and it couldn’t have been done without the core technology of the platform?</p>
<p>The app that is <em>so dumb</em> but it costs just $1 and it makes the developer a bazillion bucks.</p>
<p>That app is the fart app.</p>
<p>You know the one I mean. An app that has a big button and you hit the button and it makes a sound of a fart and that’s it. It’s in the first 10 apps that anyone downloads.</p>
<p>I want to know what it will be for the <a href="https://apple.com/visionpro">Vision Pro</a>, Apple’s big bet on spatial computing and augmented reality, which goes on pre-order in a couple of days and will be <s>in people’s hands</s> on people’s faces on 2 Feb.</p>
<p>The fart app wasn’t literally a fart app for iPhone.</p>
<p>There was an app where koi carp swam peacefully in a pond, and if you touched the screen the water would ripple and the fish would swim away.</p>
<p>Another app looked like a glass of beer and when you tilted the phone the beer would tilt too and the level would go down. Maybe there was a belch at the end?</p>
<p><em>Talking Carl</em> had a little cartoon something that repeated whatever you said only in a squeaky voice.</p>
<p>Then sound board apps to make stupid sounds.</p>
<p>These apps weren’t trivially easy to develop with the incumbent Nokia smartphones. The app platform was too cumbersome; the sensors too scarce; the screen dim and slow to respond.</p>
<p>Then we got an explosion of fart apps. So to speak.</p>
<p>But I would argue that having a “fart app” (literally or of that category) is <em>critical.</em></p>
<ul>
<li>You show your friends what your new gadget can do - those breakout iPhone apps required capacitive touch, gyroscopes, a good mic and speaker, etc - and you get to make them laugh. Showing off! But also, virality!</li>
<li>It creates understanding. Nobody can understand a new capability without trying it. A fart app shows a consumer some new complicated technology in a frame that they grok instantly.</li>
</ul>
<p>The experience is roughly: person A says to person B, oh you got that new thing. Person B says, yeah check this out. Person A tries it, gets what is unique about the thing, laughs, all within about 3 seconds.</p>
<p>So what is the fart app for Vision Pro?</p>
<hr />
<p>Maybe in the app you pretend to be Godzilla and stomp on cities.</p>
<p>Look.</p>
<p><a href="https://interconnected.org/home/2022/04/20/vr">Here are my notes from trying a Meta Quest 2 VR headset</a> <em>(Apr 2022).</em> For me the magical moments came from <em>scale.</em></p>
<p>Either:</p>
<ul>
<li>You are tiny and looking at something huge; or,</li>
<li>You are huge and looking at something tiny.</li>
</ul>
<p>Scale and height are the visceral responses available with mixed reality that you can’t get from screens. They make you gasp and make you laugh.</p>
<p>I was endlessly tickled, with my Quest 2, by a mountain that came halfway up my chest, so I could kneel down and peer into the caves, and awed standing in a towering cathedral, looking up, up, up.</p>
<p>So imagine this, dear app developer:</p>
<p>Use the new <a href="https://developers.google.com/maps/documentation/tile/3d-tiles">Google Maps API with photorealistic 3D tiles</a>.</p>
<p>Display the local city on the floor in the user’s living room. Looking through the Vision Pro the tallest buildings should come up to their knees.</p>
<p>As the user walks and stomps, the buildings smash to pieces. Cartoon figures run around and cartoonishly scream.</p>
<p>Kinda macabre sure. Kinda hilarious also. Only possible with a mixed reality headset.</p>
<p>A one-shot app, that’s all it does. I think it would work.</p>
<hr />
<p>Ok I admit this isn’t an entirely new concept: I remember once hearing about a Google Maps-style VR app with “Godzilla mode.” I never tried it, I don’t know what it did. I heard people loved it.</p>
<p>I remember the idea and imagine it anyhow.</p>
<p>Anyway it’s more about scale and spatiality than stomping buildings.</p>
<p>A 3D cosmos in your home where you can grab galaxies and set them spinning, or run your hands through stars like sand – that would work too.</p>
<p>There was that breakout VR app where you walk the plank 80 storeys in the air. That touches the same nerve but is more about jump scares than laughing.</p>
<p>Walking like a giant across a tiny forest where all the trees giggle infectiously as you squish them underfoot – I’d play that and show my friends.</p>
<p>Anyway. Something to figure out, develop and ship in the next <em>checks notes</em> 2 weeks. Yeah maybe not for me though.</p>
<hr />
<p>Two points and I mean them profoundly: don’t take technology too seriously, not even your own; and, how are people going to get it, instantly, no thinking?</p>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2024/01/16/fart</guid>
      <pubDate>Tue, 16 Jan 2024 18:45:00 +0000</pubDate>
    </item>
  </channel>
</rss>
