
• Move the most obvious sources of confusion, or FUD, out of the way. Whether or not we agree on what they are, just focus on this: human communication is hard. And beyond that, machine-to-machine communication is …an entirely new story. Hard doesn’t begin to describe it. Sure, you might say, but what can you do with that? How is it relevant to what we’re doing here and now? That’s not a bad question, exactly, but a more timely and valuable one would be: what’s a surprisingly common thread that brings together everything we find most worth researching, building on and for, and quickly advancing here today?
Synthesis.
At the edge of where open source engineering, intuitive design, universal access to local microfactory manufacturing, composable instructional material and pattern libraries, maps adaptive to a new age where IRL & virtuality become ever more closely integrated, turnkey decentralized commerce, and every other enhancement brought about by web3 blend seamlessly together.
We’ve seen the first impressive hint, in the three weeks since the public release of Stable Diffusion, of just what machines we can communicate with at a breakthrough level of coherence are capable of. Suddenly the art of the poet-like prompt engineer — of being good, really good, at textual conversation with subtle diffusion machines — is the clearest candidate for the most in-demand profession of this decade and beyond. Instead of aspiring to be a euphemistic “codetalker”, we now quite literally talk with models made from code to produce stunning results.
It can also be hard to tell whether a fast-spreading story of new tools and practices is really a breakthrough that simplifies understanding and strengthens the usefulness of multiple related areas of promise, or just exuberance for a future that is still much farther ahead.
What’s so different about image synthesis is how straightforward it is for you to put it to the test and do it, or to some extent decentralize it, yourself. Read up on the most recent Colab notebooks to run, follow the issues on open source repos about getting diffusion models running on your local M1, share prompts, modifiers, weights, hits and misses from your outputs in any of the many community Discords, and tinker and build on it all yourself.
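To make the prompt-sharing side concrete, here is a minimal sketch of composing a prompt from a subject plus weighted style modifiers. The `(term:weight)` annotation syntax is an assumption borrowed from conventions popularized by community UIs, and `build_prompt` is a hypothetical helper, not part of any official tool:

```python
def build_prompt(subject, modifiers=None, weights=None):
    """Join a subject with optional style modifiers into one prompt string.

    Modifiers listed in `weights` get the community-style "(term:weight)"
    annotation; the rest are appended as plain comma-separated terms.
    """
    modifiers = modifiers or []
    weights = weights or {}
    parts = [subject]
    for m in modifiers:
        w = weights.get(m)
        # Annotate a modifier with its weight only when one was given.
        parts.append(f"({m}:{w})" if w is not None else m)
    return ", ".join(parts)

prompt = build_prompt(
    "digital fashion garment on a runway",
    modifiers=["octane render", "volumetric lighting", "8k"],
    weights={"volumetric lighting": 1.3},
)
print(prompt)
# → digital fashion garment on a runway, octane render, (volumetric lighting:1.3), 8k
```

The output string can then be pasted into whichever notebook, repo, or Discord bot you are experimenting with, which is exactly the kind of hit-and-miss sharing described above.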
For the DIGITALAX ecosystem that means a continued emphasis on the integration of web3 tooling, a DIY ethos, and statement-making web3 fashion through increasing use of image synthesis at every stage of collection, creation, use, manufacturing, distribution, and beyond.
Coming up, we will see the release of substantial updates to:
• the flagship DIGITALAX home site,
• a DIY synthesis community research hub (DIYSynth),
• synthesis enabled decentralized social media tooling (InariSynth),
• and a detailed look at the role that W3F and other ecosystem tokens play as we advance on the path towards machine-to-machine synth w/ token auth & DIY compute APIs.
With synthesis, as with the coming merge, the common thread is how fast and powerfully it’s all coming together.
• Wen merge?
This has been a repeat question for so long that many of us wondered if the day would ever truly come. Wonder no more. Google even made a countdown clock because we’re finally here.
The ever more decentralized world will never be the same. 🎉
The only thing “controversial” about the latest update on Tornado Cash is how those responsible at Treasury thought they were justified in applying sanctions against code, infringing on freedom of speech, and spreading a chilling effect over everything and everyone caught up in this guilt-by-association dragnet. Looking forward to a lot more coordinated web3 industry responses like this one:
https://www.cnbc.com/2022/09/08/coinbase-bankrolls-suit-against-treasury-department-following-tornado-cash-sanctions.html
Not exactly a silly fail… more of an almost too perfect web3 open source dev meme…

• Docs
• FUD FAQ
• Transparency Reports
• Youtube
• Blog