The community around Stable Diffusion, the open source AI project for text-to-image generation, has been buzzing. It's gone from nonexistent a year ago to thousands of contributors, forks, and spinoffs. There's even a GUI macOS app.
Lexica is a project to index prompts and images from Stable Diffusion and make them searchable. Playing around with it, I found it pretty impressive. So much incredible possibility here. This tech will make the volume of content on the internet effectively infinite.
Honest postmortems give you the inside backstory on what happened behind the scenes at a company. In this one, Jason Crawford goes into what went wrong at Fieldbook before it shut down and the team was acquired by Flexport a couple of years ago:
Now, with a year to digest, I think this is true and was a core mistake. I vastly underestimated the resources it was going to take—in time, effort and money—to build a launchable product...
A list of broad laws that apply to all fields. Thoughtful stuff as always from Morgan Housel:
6. Parkinson’s Law: Work expands to fill the time available for its completion.
In 1955 historian Cyril Parkinson wrote in The Economist:
It is a commonplace observation that work expands so as to fill the time available for its completion. Thus, an elderly lady of leisure can spend the entire day in writing and despatching a postcard to her niece at Bognor Regis. An hour will be spent...
The Humanitarian OpenStreetMap Team has been working on an experimental version of the Tasking Manager tool that incorporates deep learning-assisted mapping projects.
The OSM community has long been (and still largely is) averse to machine-based mapping, since it runs counter to the project's founding ethos of maps "created by mappers, where they live." But if the project is to survive and see continued adoption in commercial applications, there has to be an effort to improve the depth of coverage and refresh rate to stay competitive with commercial providers like...
In the era of every company trying to play in machine learning and AI technology, I thought this was a refreshing perspective on data as a defensible element of a competitive moat. There’s some good stuff here in clarifying the distinction between network effects and scale effects:
But for enterprise startups — which is where we focus — we now wonder if there’s practical evidence of data network effects at all. Moreover, we suspect that even the more straightforward data scale effect has limited...
This is a great breakdown of the different elements of LiDAR technology, looking at three broad areas: beam direction, distance measurement, and frequencies. They compare the tech of 10 different companies in the space to see how each is approaching the problem.
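As a concrete illustration of the distance-measurement element: most systems in the comparison use some form of time-of-flight ranging, where distance falls directly out of a laser pulse's round-trip time. A minimal sketch of that arithmetic (the pulse timing here is made up for illustration):

```python
# Time-of-flight ranging: distance is half the round-trip travel time
# of a light pulse multiplied by the speed of light.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def tof_distance_m(round_trip_seconds: float) -> float:
    """Distance to a target given the pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after ~667 nanoseconds hit something ~100 m away.
print(f"{tof_distance_m(667e-9):.1f} m")  # -> 100.0 m
```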
This is an interesting interview with Been Kim from Google Brain on developing systems for seeing how trained machines make decisions. One of the major challenges with neural network-based deep learning systems is that the decision chain used by the AI is a black box to humans. It's difficult (or impossible) for even the creators to figure out what factors influenced a decision, and how the AI "weighted" the inputs. What Kim is developing is a "translation" framework for giving operators better insight into the decision chain of AI:
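The interview stays at a high level, but the simplest version of this idea is gradient-based saliency: ask how sensitive the model's output is to each input feature. To be clear, this is not Kim's framework (her work is about higher-level, human-friendly concepts), just a toy PyTorch sketch of the general "which inputs mattered" approach:

```python
import torch
import torch.nn as nn

# A toy model standing in for any black-box network.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
model.eval()

# One input example; track gradients so we can attribute the output to it.
x = torch.randn(1, 4, requires_grad=True)
score = model(x).sum()
score.backward()

# The gradient magnitude per feature is a crude signal of input influence.
saliency = x.grad.abs().squeeze()
for i, s in enumerate(saliency.tolist()):
    print(f"feature {i}: {s:.3f}")
```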
This talk on “generative AI” was interesting. One bit stuck out to me as really thought-provoking:
Dutch designers have created a system to 3D print functional things in place, like this bridge concept. Imagine being able to place a machine, give it a feed of raw material, and cut it loose to generate something in physical space. As the presenter mentions at the end of the talk, it's a move from things that are "constructed" to things that are "grown."
This week was Amazon’s annual re:Invent conference, where they release n + 10 new products for AWS (where n is the number of products launched at last year’s event). It’s mind-boggling how many new things they can ship each year.
SageMaker was launched last year as a platform for automating machine learning pipelines. One of the missing pieces was the ability to build training datasets with your own custom data. That’s the intent with Ground Truth. It supports building your dataset in S3 (like a group of images), creating a labeling task, and distributing it to a team to annotate...
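I haven't run one of these jobs myself, but from the docs the flow seems to map onto a single boto3 call. A rough sketch, where every name, bucket, and ARN is a placeholder (the two Lambda ARNs stand in for the AWS-provided pre-annotation and consolidation functions for your task type and region):

```python
import boto3

# Sketch of kicking off a Ground Truth image-labeling job.
sm = boto3.client("sagemaker")

sm.create_labeling_job(
    LabelingJobName="cat-photos-v1",
    LabelAttributeName="label",
    InputConfig={
        "DataSource": {
            "S3DataSource": {
                # Manifest listing the S3 objects (e.g. images) to label.
                "ManifestS3Uri": "s3://my-bucket/manifests/images.manifest"
            }
        }
    },
    OutputConfig={"S3OutputPath": "s3://my-bucket/labels/"},
    RoleArn="arn:aws:iam::123456789012:role/GroundTruthRole",
    HumanTaskConfig={
        # The private work team that will annotate the images.
        "WorkteamArn": "arn:aws:sagemaker:us-east-1:123456789012:workteam/private-crowd/my-team",
        "UiConfig": {"UiTemplateS3Uri": "s3://my-bucket/templates/image-label.liquid"},
        # AWS-provided Lambdas for the chosen task type (placeholders here).
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:432418664414:function:PRE-ImageMultiClass",
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-1:432418664414:function:ACS-ImageMultiClass"
        },
        "TaskTitle": "Classify images",
        "TaskDescription": "Pick the label that best matches each image",
        "NumberOfHumanWorkersPerDataObject": 3,
        "TaskTimeLimitInSeconds": 300,
    },
)
```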
Google has built their own custom silicon dedicated to AI processing. The power efficiency gains from these dedicated chips are estimated to have saved them from building a dozen new datacenters.
But about six years ago, as the company embraced a new form of voice recognition on Android phones, its engineers worried that this network wasn’t nearly big enough. If each of the world’s Android phones used the new Google voice search for just three minutes a day, these engineers realized, the company would need twice as many data centers.