HARD WORK PARTY

INTERVIEW: Noah Norman on Epoch Optimizer for Derivative

Derivative, makers of TouchDesigner, have just posted a long-form interview with Noah Norman, Chief of Hard Work Party, about Epoch Optimizer, our installation for AWS at the 2023 re:Invent conference Sustainability Summit.

This interview goes deep on the efforts to ‘rhyme’ the form and function of this piece as an exploration of the use of generative AI to envision a more sustainable alternate present.

In addition to the design and theme considerations, we discuss some of the technical aspects involved in the realization of the work, including command and control of real-time inference from TouchDesigner as a scripting and presentation layer.

Read the interview on Derivative’s site here.

Wednesday 09.18.24
Posted by CHIEF
 

GENERATIVE AI in PRODUCTION

Prototype interior images generated for a home decor client. Generated from a textual embedding derived from client-supplied training material.

As with any hot technology, we get a lot of inbound requests from prospective clients looking to deploy generative AI in production. As is often the case, we’re finding that clients’ expectations of what these systems can do don’t align with how the systems actually behave in production.

We saw this dynamic in the past with architectural projection mapping, which photographs well from the vantage point of the projector, but can be underwhelming when seen in person, off-axis, with stereo vision, unretouched, where the projected light is fighting with ambient spill. Trying to communicate these shortcomings in advance is often met with pushback in the form of … more photos and video, which of course look great.

Today’s expectations of the capabilities of generative AI are calibrated by exposure to edited (some might say cherry-picked) samples of LLM and text-to-image outputs. I’m not trying to make the case that these systems can’t produce excellent outputs, but rather that they do so infrequently, and that systems that expose end users to raw outputs without a curation step in the middle are likely to underperform expectations at best, and cause a fiasco at worst.

At issue, in part, is the tendency to center the AI-generatedness of the product in its presentation, which (perversely, IMO) raises expectations in the viewer, who has likely already seen some impressive AI-generated outputs.

Contributing to this is the spectacle that accompanies marketing initiatives, further raising expectations of the output quality.

As with most distributions, of course, the best stuff - the stuff that gets passed around - is under the top tail of the bell curve, and the majority - say, 85% - of the outputs strike the viewer as anywhere from less-great-than-other-AI-stuff-they’ve-seen all the way down to objectively awful.

Empirical Rule by Melikamp, modified by me, CC BY-SA 4.0

Especially in the world of AI-generated images, the average viewer’s baseline is often set by the impressive albeit all-too-recognizable and HDR’d-all-to-hell output of Midjourney. Midjourney makes aesthetically impactful images with an incredible hit rate (very few duds, munged images, deformed bodies, etc), but Midjourney is made of secret sauce and doesn’t have an API at the time of this writing. That means its influence on AI image-making in real production acts exclusively to raise the bar in a way that leads to unrealistic expectations.

All of this is to say, at the moment, if you want to present the top X% of AI-generated output, you have to design your experience / system / product to accommodate an editing or curation step that removes the other (100-X)% - curation done either by the end user or by a wizard behind the curtain.

When clients push back on this idea I find myself explaining that, in service of the (often short-lived, often promotional) project they want to undertake, they’re proposing that we improve on the state-of-the-art products of VC-backed giants Stability and OpenAI.

This task raises a semi-rhetorical question I sometimes ask people with ambitious ideas for small projects — ‘given how much it will take to do that, it will be pretty valuable if we pull it off. Is this something you want to spin off into a startup?’

With that said, there are strategies for hiding curation and for improving the hit rate of these generative systems, and even turning the constraint of a low hit rate into a feature that contributes to an experience.

We’ve learned a lot in the last 2 years about deploying generative AI in production, and below I’ll use three recent projects to discuss how we’ve approached this issue.


HIDDEN HUMAN in the LOOP: ENTROPY

Entropy was a piece I made with Chuck Reina beginning in 2021, using then-SOTA VQGAN+CLIP and GPT-3 in a feedback loop to create what presents as an (unhinged) collectible card game.

We truthfully presented the work as the result of repeatedly prompting these two AI systems with the previous generation’s outputs, but what we didn’t highlight was that the process involved an EXTREMELY LABOR-INTENSIVE curation step.

I personally selected among four options for every field on every card before generating the next generation, which doesn’t sound that bad until you realize that 3,000 cards * 7 fields * 4 options = 84,000 curation steps, not including the first two times I generated the first 10 generations of the project before restarting for various reasons.
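
To make the shape of that loop concrete, here’s a minimal sketch of the generate-and-curate cycle. The field names and function bodies are stand-ins - the real pipeline drove GPT-3 and VQGAN+CLIP - but the structure is the same: four candidates per field, one human decision each.

```python
# Sketch of the Entropy generate-and-curate loop (illustrative names only;
# the real pipeline called GPT-3 / VQGAN+CLIP and a web-based review tool).

FIELDS = ["name", "type", "cost", "attack", "defense", "flavor", "art_prompt"]  # 7 illustrative fields
OPTIONS_PER_FIELD = 4

def generate_options(previous_card: dict, field: str, n: int) -> list[str]:
    """Prompt the generative model with the previous generation's card for n candidates."""
    raise NotImplementedError  # stand-in for a GPT-3 / VQGAN+CLIP call

def human_pick(field: str, options: list[str]) -> str:
    """The hidden hand: a person picks one candidate (the keyboard-driven review tool)."""
    raise NotImplementedError  # stand-in for the curation UI

def next_generation(previous_card: dict) -> dict:
    card = {}
    for field in FIELDS:
        options = generate_options(previous_card, field, OPTIONS_PER_FIELD)
        card[field] = human_pick(field, options)  # one curation decision per field
    return card
```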

To get this done, Chuck built a web-based, keyboard-driven review tool that I used for literally 50 days straight until I had carpal tunnel in both hands.

Had we not included this hidden hand, the recursive AI outputs would have quickly devolved into sophomoric jokes, pop culture references, or regurgitation of ideas from other places.

Many of you never used GPT-3 - in fact, many of you have never heard of it, for good reason: it was far less capable than the GPTs we use today. Similarly, VQGAN+CLIP, while revolutionary, was the last of the GANs used in any major way. Diffusion models are a big step up.

If you want to read more about the process and learnings that came from this project, check out this thoughtful writeup/interview in nftnow.

CURATION as EXPERIENCE: CHEESE

Just a small handful of the stuff on my Cheese profile page, which I encourage you to peruse.

The first time I saw Dreambooth in action it was like being hit by a bolt of lightning. I instantly envisioned a social network that resembled Instagram but was entirely made of AI-generated images.

I immediately teamed up with a crew to make a prototype: a social network called Cheese. Users entered prompts with the keyword ‘me’ to identify themselves, posted what they liked, and hit ‘do me’ on public images of others to get a remixed version of that prompt with themselves in it.

It was, to put it mildly, addictive, sticky, hilarious, and gonzo.

What it wasn’t was magic. It produced plenty of deformed, unrecognizable, blurry, munged, off-prompt, confusing, upsetting, and disappointing images.

But Cheese was designed to be a paid service - users buy tokens, and they spend those tokens to generate - effectively to purchase - images. From interviewing users, I identified early on that one of our most important KPIs, aside from how many users we got through onboarding, or MAUs, or CAC, or ROAS, or whatever, was going to be our hit rate.

That number - what percentage of the images generated are images that a user likes - that they would post, share, download, etc. - was a major contributor to the average user’s perception of the quality of the experience, and we found that users’ initial expectations of what that number should be varied wildly but were generally higher than we could satisfy.

We expended a ton of effort, especially for a prototype (unsurprising if you know us), in bringing that hit rate up. We ran loads of experiments on training, data preparation, inference, prompt injection, and onboarding, and internally discussed dozens of additional approaches as we saw them bubble up in papers. Some of them moved the needle, but we saw (rightly, as time has borne out) that publicly-available approaches in the space wouldn’t do much to change the balance in the coming year-plus.

With that in mind, we did what we always do: we embraced the constraints. We designed the Cheese UX with user curation at the center of the loop.

We designed a system where each prompt generated 9 variations that were only visible to the user in a private ‘camera roll’, from which they could choose to make any, or none, of the images public, only then posting them to their profile and making them visible in the public feed.
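
A minimal sketch of that posting flow, with a hypothetical data model - the real service had payments, queues, and moderation wrapped around it, but the core idea is simply that nothing is public until the user curates it:

```python
# Sketch of the Cheese curation-first posting flow (hypothetical data model).
from dataclasses import dataclass
from uuid import uuid4

BATCH_SIZE = 9  # each prompt produced nine variations, visible only to their owner

@dataclass
class Generation:
    image_id: str
    prompt: str
    owner: str
    public: bool = False  # everything starts in the private 'camera roll'

def run_prompt(owner: str, prompt: str) -> list[Generation]:
    """Produce a private batch for one prompt; the diffusion backend is elided."""
    return [Generation(image_id=str(uuid4()), prompt=prompt, owner=owner)
            for _ in range(BATCH_SIZE)]

def publish(camera_roll: list[Generation], chosen_ids: set[str]) -> list[Generation]:
    """User curation: only explicitly chosen images ever reach the public feed."""
    for g in camera_roll:
        if g.image_id in chosen_ids:
            g.public = True
    return [g for g in camera_roll if g.public]
```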

Because the users weren’t using a real camera but rather entering a prompt or hitting ‘do me’ and waiting for the results to compute, the private tab became more like a slot machine - a place our hooked users repeatedly refreshed, anticipating the dopamine hit from the variable reward of a batch of unfathomably weird, surprising, and hilarious images.

We found that this dynamic mirrored how users behave on other platforms: some posted loads of their outputs, some edited heavily and posted only the very best, and some almost never posted, instead sending their generations privately to friends.

If we could, would we engineer a system where 100% of the images produced were ‘hits’? Absolutely. Failing that, our users were very happy with where we arrived, putting the control in their hands and making curation part of the experience.


CONSTRAIN and BATCH: INTERIORS

Some prototype images from an ongoing project to generate interior images in a very specific style.
These 9 were cherry-picked, you could say, out of a batch of 64.

We’re currently engaged in an ongoing project helping a client in the decor space produce photo-realistic AI images with a very particular aesthetic. They’re attempting to use AI to reduce their costs in staging and photographing product for small-screen promotional efforts, e.g. in social media and in digital advertising.

Because this client has been producing photos and renderings in this style for years, the surfeit of training material meant fine-tuning of some sort was an option for capturing their look. Through some experimentation, we found a textual inversion, rather than LoRA or Dreambooth, to provide an effective and easy-to-use way to constrain generations to the style of the training material.
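
To give a sense of how lightweight a textual inversion is to use once trained, here’s a minimal inference-time sketch with Hugging Face diffusers. The embedding file, the placeholder token, and the base model are illustrative assumptions, not the client’s actual setup.

```python
# Sketch: applying a trained textual inversion embedding at inference time with diffusers.
# Paths, token name, and base model are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The learned embedding captures the client's look; the token is how a prompt invokes it.
pipe.load_textual_inversion("client_style_embedding.safetensors", token="<client-style>")

images = pipe(
    "living room with a sculptural sofa and a low side table, in the style of <client-style>",
    num_images_per_prompt=4,
).images
```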

With that said, even a fine-tuned Stable Diffusion has issues with counting, lighting, and realism - issues exacerbated by the training material, which included loads of unconventional furnishings like amorphous couches, improbably cantilevered chairs, and side tables of indeterminate shape and material. The fine-tuned output, while instantly on-vibe, still had a low hit rate and still produced stools with 2 legs, confusing shadows, and side tables with too many surfaces.

To counteract this tendency, we found using ControlNet to be an effective way to ‘lock’ the composition of an image and iterate on its contents in a way that produces variations within a narrow range of blocking options but with different mise en scene.

That approach, combined with plain old batching - generating dozens, hundreds, or thousands of images at a go for human curation before presentation - resulted in a system that, while still batting something like .130, produces loads of high-quality, usable images at once for minimal human effort. That’s a huge win for the client over their previous workflow, which involved unsustainable cost and supervision for the nebulous ROI of social media grist.
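
Here’s roughly what that lock-and-batch step looks like, again with diffusers. The ControlNet variant, the edge-map conditioning image, and the batch size are placeholders - the point is that composition is held fixed while everything else varies, and a human picks from the results.

```python
# Sketch: 'locking' composition with ControlNet and batching variations for human curation.
# Model IDs, the control image, and batch sizes are placeholders.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("client_style_embedding.safetensors", token="<client-style>")

control = Image.open("approved_composition_edges.png")  # edge map of an approved layout

# Same blocking, different mise en scene: generate a batch, then hand it to a curator.
batch = pipe(
    "sunlit den with a cantilevered chair and a stone side table, <client-style>",
    image=control,
    num_images_per_prompt=8,
    num_inference_steps=30,
).images
for i, img in enumerate(batch):
    img.save(f"candidate_{i:03d}.png")
```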

LASTLY

The last trick up our sleeve - compositing / collage / outpainting. De-emphasizing the importance of any given image by including it in a larger composition can make incoherent individual generations acceptable as a contribution to a surreal gestalt. Just ask Hieronymus Bosch!

We deployed this technique to great effect in the continuous-transformation inpainting process employed in Epoch Optimizer, our durational generative installation for AWS at their 2023 re:Invent conference. Check out the case study at that link, or read our in-depth interview on the making of Epoch Optimizer with TouchDesigner makers Derivative at their site.

AWS / Epoch Optimizer direct capture [2023]
AWS Epoch Optimizer: Source Montage

The Garden of Earthly Delights, Hieronymus Bosch, c1500

Want to explore using generative AI in production? YOU KNOW WHERE TO FIND US! (here)

Thursday 08.17.23
Posted by CHIEF
 

Mass Interaction Design: One to Many to One

How we engineered the user experience design of Elsewhere Sound Space to maximize interaction opportunities for a crowd of thousands while still running a halfway-coherent show.

When a crowd of 10,000 people yell at the same time, does it matter what any of them say?

How can an audience that big contribute to a conversation?

What does it mean to be 'interactive' when the group pushing the buttons are so many?

As creators of both physical installations and web-based experiences, we're used to thinking about either interaction for meatspace users in the dozens, or realtime, often streaming, software experiences that reach viewers in their tens of thousands ... but the idea of combining the two in an interactive experience for a mass audience was new territory.

The context in which we were thinking about this was 2021, mid-pandemic, when we were working with Twitch and beloved Brooklyn music institution Elsewhere to find ways for homebound audiences, hungry for the community and togetherness of live music, to connect.

At the time, virtual production-powered concerts were everywhere; virtual festivals were taking place in browser windows, well-known musicians were represented in machinima extravaganzas in Fortnite, and even simple livestreams felt especially poignant and special.

But none of these approaches offered a real sense that the show you were watching was different because you - one specific, dedicated fan - were there. It's not really possible to feel seen in a traditional one-to-many streaming audience. That’s a broadcast.

And while meaningful interaction could be a great thing to offer a huge crowd, if every fan gets to come up and touch Rod Stewart's hair, he's either not going to get a lot of singing done or there can't be all that many fans doing the touching.

If we tap the glass, we want the fish to look at us, then go back to doing fish stuff.

It was from this perspective that we began thinking about what's possible with bidirectional communication in a live performance when the show is led by its tech team.

[The result was Elsewhere Sound Space <— check that link if you just want to read about the show - this post is about the interaction design behind the show.]

The Disruption Pricing Principle: A Framework for Mass Interaction

This chart and the axes it posits are the main idea behind this post. Don’t worry about the specific interactions plotted here - they’re explained below. Just pay attention to the axes and the implications of the various regions of the space described.

The framework we came up with plots viewer-initiated interactions on axes from 'cheap' (easy to trigger) to 'expensive', and on the other axis, from unobtrusive to disruptive.

The theory is simple: viewers get a thrill from being a part of the show, and that thrill is proportional to how much influence their action has on the events taking place onscreen. Constraining this is the idea that the amount of this excitement available to viewers is finite - the more aggregate presence viewers have in the developments onscreen, the more the show becomes about viewer involvement, and the less value any single viewer interaction has.

Shows that index heavily in this direction can be hilarious and thrilling and groundbreaking in their own right, but they feel more like a video game, where the audience is almost puppeting the talent onscreen, and in that context, getting a response from interaction is so expected as to feel almost mundane once you get used to it. Keeping this tension in balance — optimizing the utility of viewer interactions while still having the show be about something else — is the mechanic we’re investigating.

In order to ‘price’ interactions appropriately, actions that are especially disruptive to the show should be challenging to effect, whereas actions that have little to no effect on the show shouldn't take a lot of effort or money. Things that live outside of those two quadrants of the chart will either be too easy to abuse and thus frequently disruptive to the show (cheap and disruptive - eg hecklers) or disappointing for the viewer (expensive and underwhelming - eg VIP tickets for a 1-second meet and greet).

We should clarify here that while 'disruptive' is generally a negative term, in our case the show was intended to be a gonzo experience thematically, aesthetically, and technically, and show-derailing interactions from the audience, while disruptive to the furthering of the gonzo plot, can be hugely fun and are crucial in fostering the giddy feeling that the onscreen talent might look into the camera and speak directly to you at any moment.

That’s that ‘tap the fishbowl’ feeling we were after, but the effect diminishes with the frequency with which it’s seen.

We want the fourth wall there so when we break it, it counts.

In the interest of letting as many audience members as possible have a hand in initiating these kinds of rewarding interactions, we looked to offer a blend of options so that viewers could participate in any fashion from casual, as a crowd member, to aggressive, as a super-fan, much as they would in a rowdy live audience.

Let's look at some of the interactions we built, starting small and working our way up:

CHEAP and UNOBTRUSIVE

Chat Interactions

On Twitch, 'the chat' is a character in every show, and it's the main channel through which audiences and on-screen talent interact. It's become a convention in some of the more graphics-heavy Twitch shows to put the actual chat onscreen in order to provide a diegetic explanation for the talent's sightline when reading from an in-studio monitor.

The chat was sometimes literally onscreen.

It's also an easy place to sprinkle in opportunities for viewers to drive unobtrusive interactions, including automated responses to user keyword commands, whether in the open channel or as a DM. We offered a few chat-based bang commands, like '!scripture', which viewers could use to dredge up bits of lore, backstory, and color writing from the enormous database to which we were constantly adding.

Having the voice of the show reinforcing tone in the chat, seemingly in dialogue with viewers, provided high ROI for an ongoing process of populating the response databases with tidbits as they occurred to us.
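
This wasn’t our production stack, but a bang-command responder is only a few lines with a library like twitchio; the token, channel name, and lore table below are placeholders.

```python
# Sketch of a chat bang-command responder (placeholder token, channel, and lore).
import random
from twitchio.ext import commands

SCRIPTURE = [
    "made-up example verse one",
    "made-up example verse two",
]  # in production this came from a constantly-growing database

class ShowBot(commands.Bot):
    def __init__(self):
        super().__init__(token="oauth:...", prefix="!", initial_channels=["example_channel"])

    @commands.command(name="scripture")
    async def scripture(self, ctx: commands.Context):
        # Reply in the open channel with a random bit of lore / color writing.
        await ctx.send(random.choice(SCRIPTURE))

if __name__ == "__main__":
    ShowBot().run()
```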

Time-Limited, Binary Voting
(choose your own adventure)

As a visually rich virtual production improv show, the settings in which we placed our talent were key to driving story and conversation forward — in reality, our talent were standing in separate rooms facing a camera and a whole bunch of monitors, one of which was showing the composited scene, one the chat. To that end, our hosts would often put the choice of a change of scenery to the crowd, triggering a timed vote in which the majority won.

This is our first example in a category of interactions designed to let a large group of people each individually participate in a collective action that has more weight in the proceedings of the show than we'd want any single viewer to have.

Viewers chose the next location for the action.

This interaction was also used to allow the audience to weigh in on subjective decisions that directly affected the plot. In the case of the image below, it was to determine the winner of a key debate for Space President:

A vote to determine the winner of a debate between the challenger and incumbent in the race for Space President.

Onscreen Easter Eggs

Just seeing your name on screen can be rewarding, so we looked for easy ways to make that happen whenever possible. In the case of the Mud Bath scene, viewers who poked around in the “!menu” command or who saw other users doing it would notice that they could type “!steam” in the chat, releasing a burst of steam with their name spelled out in it and raising the temperature on the onscreen thermometer, which slowly fell between bursts. Their username would rise with the steam and collide with the ceiling and walls of the bath before dissipating.

The rate of this interaction was gated by the temperature in the room. Once over a threshold of some 330K (about 135F), viewers who tried to use this interaction were told by the chat responder-bot that they needed to let the room cool a bit before adding more steam.
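
The gating logic itself is tiny; here’s a sketch with illustrative constants (only the ~330K / ~135F threshold comes from the show - the rest is made up for the example).

```python
# Sketch of the !steam temperature gate: each burst heats the bath, the bath cools
# over time, and above the threshold the bot asks viewers to wait.
import time

AMBIENT_K = 310.0        # resting bath temperature (illustrative)
THRESHOLD_K = 330.0      # ~135F: above this, no more steam
HEAT_PER_BURST = 2.5     # degrees added per !steam (illustrative)
COOL_PER_SECOND = 0.05   # passive cooling rate (illustrative)

temperature = AMBIENT_K
last_update = time.monotonic()

def try_steam(username: str) -> str:
    """Called when a viewer types !steam; returns the chat bot's response."""
    global temperature, last_update
    now = time.monotonic()
    temperature = max(AMBIENT_K, temperature - COOL_PER_SECOND * (now - last_update))
    last_update = now
    if temperature >= THRESHOLD_K:
        return f"@{username} the bath is too hot - let it cool before adding more steam"
    temperature += HEAT_PER_BURST
    # this is also where the show would fire the on-screen steam burst with the username in it
    return f"@{username} releases a burst of steam!"
```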

Usernames emitting from steam bursts triggered by those users.

Judged Submissions

Each episode included a long-running interaction, promoted at intervals via onscreen text, reminders in the chat, and occasionally by the talent themselves, asking viewers to come up with a new name for the guest in the event that they were indoctrinated into the cult at the end of the episode - another thing that was put to a vote.

Names entered this way (purchased with 'channel points' — more on this below) went into a database that was reviewed off-camera by the show crew, who manually picked a winner for the host to announce at the end of the episode, along with acknowledgement of the viewer who suggested it.

A viewer is acknowledged for contributing the winning cult name for the guest, as judged by the offscreen crew.

Dynamic Credits

We took advantage of the ability to dynamically generate the credits just before they went onscreen to make sure every subscriber and contributor was acknowledged and the top contributors were surfaced too.

Stats about top contributors calculated at credit runtime.

A list of current subscribers, updated just before it went onscreen.

We also hyped a 'Soul Match' gag repeatedly throughout the show, held until the very end of the credits, in which anyone who contributed to the show during the episode was entered for a chance to match with another contributor, or, if they were extremely lucky, with The Founder of the cult - a semi-conscious cadaver who is currently in cryo suspension and only communicates through a receipt printer.

Worshipper internetexplorer matched with THE FOUNDER!!

RIGHT in the MIDDLE
(not too intrusive, not too expensive)

You’d think this category would be where we focused most of our interaction efforts, but most of our ideas were on the more extreme ends of the spectrum. With that said, voting in the ‘offerings’ gag below was likely the most common action taken by any given user.

Parimutuel Voting
(Offerings)

Onscreen tool tip tutorializing the offering mechanism. These kinds of tips were crucial to bringing the audience along with the operation of the various interfaces around the show without requiring the Leader to break the fourth wall and explain the interaction mechanism, which never felt funny or fun.

As an enticement to spontaneous audience coordination towards a common goal, we implemented a 'parimutuel voting' system, in which viewers could assign and re-assign their vote at any time among three options, but the decision event wasn't triggered until a certain number of total votes was cast, and until a certain percentage of those votes was cast for a specific option. This required some consensus-building.

The worshippers have offered the Leader some Hyper Mist

When these threshold numbers were large, it meant that viewers had to coordinate in the chat to align opinion around a specific (often ridiculous) choice - in this case among sight-gag prop jokes framed as 'offerings' to the Leader. When the threshold was reached, a sound and title package would overwhelm the frame, insisting that the Leader (and the guest, if they were on camera), accept the offering of their followers by enjoying a Pleasure Cylinder, or a Cryo Gel, or some Hyper Mist, or Rocks, or a Dark Secret, etc. Prop comedy ensued.
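
The trigger condition is the interesting part, so here’s a sketch of it; the thresholds are illustrative, not the show’s actual tuning.

```python
# Sketch of the parimutuel 'offerings' trigger: votes are re-assignable at any time,
# and nothing fires until both thresholds are met. Numbers are illustrative.
from collections import Counter

OPTIONS = ["Hyper Mist", "Pleasure Cylinder", "Rocks"]
MIN_TOTAL_VOTES = 50   # total participation required
WIN_SHARE = 0.6        # the leading option needs this share of votes cast

votes: dict[str, str] = {}  # username -> current choice (re-assignable)

def cast_vote(username: str, option: str) -> str | None:
    """Record or replace a viewer's vote; return the winning option if the event triggers."""
    if option not in OPTIONS:
        return None
    votes[username] = option
    tally = Counter(votes.values())
    total = sum(tally.values())
    leader, count = tally.most_common(1)[0]
    if total >= MIN_TOTAL_VOTES and count / total >= WIN_SHARE:
        return leader  # fire the sound-and-title package for this offering
    return None
```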

Ads

An ad for something called ‘Success Tabs’, triggered by viewer stephalope’s tip

A simple way to gate the rate at which viewers take somewhat obtrusive onscreen interactions is to literally price them, in dollars, or, in the Twitch world, in a currency called 'bits', which cost dollars to buy.

The idea of onscreen ads paid for by viewers seemed like a good reward for a nominal contribution of around a dollar, so at any time a viewer could 'buy' an 'ad' for a random product they didn't know in advance, and the ad bubble would appear and float up over the action, featuring some randomly-selected ad copy and their username.

DISRUPTIVE and EXPENSIVE

On the very disruptive and expensive end of the interaction spectrum, we had options in both collaborative action and individual action mechanisms.

Bit Meter: Aggregate Tips

At all times in which the show was open for interaction, which was any time the musical guest wasn't performing or saying something especially personal, there was a meter in the top right of the screen charting the audience's progress towards triggering a disruptive action. The meter was moved along via bit contributions (monetary tips) to the show, and when it filled up, it triggered one of a few actions, usually every 30 minutes or so.

A favorite was the 'Flashback', where the screen would go black and white and wavy (classic flashback effect) and the host would be overcome with an often painful and bizarre improvised memory from the show's backstory.

Next tip of 96 or more bits kicks off the flashback.

“Oh! Ohhhhh!! I’m suddenly remembering my father’s accountant, Sol … god he was a terrible accountant!”

Absolution and Affirmations

The two most show-stopping interactions a single user could trigger were the Absolution and the Affirmation. Users could only invoke these effects by spending a large amount of 'channel points', loyalty points that Twitch viewers accrue on a channel-specific basis. It's a currency that channel administrators decide whether, how, and for what viewers can redeem.

In our case, viewers could spend a large number of channel points - an amount it would take a very active participant almost an entire episode's worth of engagement to accrue - to purchase an Absolution or an Affirmation from the Leader. This was deliberately priced at a level only attainable by serious fans willing to invest a lot of their slowly-earned channel points into one big action.

The Affirmation was conceived as a sincere response to the psychic turbulence of the real-world moment: a beat where the Leader, calling the viewer out by username and looking directly into the camera, would tell them that they are OK - that they are good enough the way they are, and that they are loved - all filtered through an intense, throbbing color LUT and a tunnel-vision blur effect.

The Absolution was a similar gag, although delivered in a more tongue-in-cheek fashion, in the style of a megachurch preacher, where the viewer would purchase absolution for a specific sin they described at the time of purchase.

In both cases, the actor playing the Leader would see the username and the Affirmation or Absolution (and the sin requiring Absolution) waiting in a queue on their HUD screen, and as soon as they could get to it, they'd begin the acknowledgement and the TD would trigger the effect and let the moment take over the show.

In addition to verbal exposition from the host and reminders from the moderators in the chat, occasional onscreen tips would guide users to purchase Absolutions and Affirmations.

An Absolution for viewer seansatomusic

Wrap

There’s more to cover but we hope you get the idea of how we applied the framework to our strategy in a way that enabled as many viewers as possible to experience the frisson of having the TV talk to them while still running a (barely) coherent show.

If you’d like to read more about the production, check out our case study here. If you’d like to see us make more Elsewhere Sound Space, help us find a sponsor!! Seriously, we need a sponsor.

If you’d like to read more about the tech behind the production, we did a couple of long-form interviews on the topic and the videos are linked in this blog post.

If you’d like to make another show like this with us, reach out. We’d love to spend all our time making this show or something equally wild and ground-breaking.

Tuesday 08.01.23
Posted by CHIEF
 

ELSEWHERE SOUND SPACE TECH INTERVIEWS 2021

In the past months I’ve had the opportunity to talk about the software behind Elsewhere Sound Space from two different angles in interviews now up on YouTube.

For TouchDesigner InSession with Derivative founder Greg Hermanovic, technical director Markus Heckmann, and director of community Isabelle Rousset, I talked about the creative process of building the dozens of shots and interactive music videos that went into the show.

It’s always fun to talk with the Derivative folks — while as the creators they know the software better than anyone, they retain an intense, almost childlike curiosity about the ways in which their program is used. It didn’t hurt that they’re big fans of the show.

With the Interactive and Immersive HQ’s boss Elburz Sorkhabi, the conversation was focused on the use of Python in TouchDesigner to enable a modular architecture — a design tenet that was critical to keeping the growth of the show platform creative and loose, even when making new show elements in the moments before we went live.

Burz and I disagree about some of the methods we discuss in this interview so it might be interesting for some viewers to hear a debate over architectural style for a framework like TD.

Thursday 06.03.21
Posted by CHIEF
 

INTERACTIVE IMMERSIVE HQ INTERVIEW 2020

If you're interested in installation artwork, the business of creating permanent digital artworks, Platonic solids, TouchDesigner, UV mapping, unwrapped rendering, LED sculpture, or all of the above, you might enjoy this.

Elburz Sorkhabi is a great host and we're talking about the Rosetta Hall installation, which is yet to be documented on the site (sneak peek!).


categories: lighting, projects
Friday 10.09.20
Posted by CHIEF
 

PSST.ONE INTERVIEW: BERLIN 2018

When in Berlin for the TouchDesigner Summit and very very food poisoned (thanks Elburz), I did an interview with the very funny and incisive Andrea from psst.one.

This edit heavily features clips from a 2016 month of daily visual experiments I created and documented in one hour each, which is funny to see alongside me being all serious about my ‘work’. Thanks Andrea for making me look relatively coherent and for the nice lighting.

Tuesday 04.10.18
Posted by CHIEF
 

TOUCHDESIGNER SUMMIT 2018 BERLIN TALKS

Recently I had the opportunity to speak at the Berlin TouchDesigner Summit, hosted by Derivative, makers of TouchDesigner.

I gave a talk on using TouchDesigner and GLSL to make a fluid simulation using the incompressible Navier-Stokes advection formulae, and I led a workshop on drawing in Python for plotters like the AxiDraw. Both those videos are online now.

The workshop is a bit long and didn't move as fast as I'd have liked - it was billed as an intermediate workshop but the crowd was of mixed skill level, so we paused a lot for bug hunts ... in retrospect I should have made the step-by-step instructions a little clearer.

In contrast, the GLSL talk moves at 100mph the whole time, so you don't need to watch in fast-forward.
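
For reference, the incompressible Navier-Stokes system the fluid sim is built around is the momentum equation (advection, pressure, viscosity, forcing) plus the divergence-free constraint:

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla \cdot \mathbf{u} = 0
```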

Enjoy!

tags: touchdesigner, glsl, talks, python, plotters
Tuesday 03.06.18
Posted by CHIEF
 

ANCILLARY MAGNET is now HARD WORK PARTY

It's official. Filed a DBA and got a seal on it and everything.
New URL: hardwork.party
New logo: a basic equilateral triangle, soon to be animated a zillion ways


New colors: khaki drab (230/223/213) and warm grey (85/86/87)
New name: Hard Work Party - both easier to spell and more fun ... but all with the same great flavor you've grown to love. Seriously. Same great flavor.

Tell your friends!

categories: dba
Tuesday 09.20.16
Posted by CHIEF
 

BODY LABS PODCAST →

Last week I was a guest on the Body Labs podcast, talking about the present and future of virtual and augmented reality with host (and old friend) Eric Rachlin ... we talked about older VR tech, storytelling in VR, and augmented reality UI challenges, including Hololens, Leap Motion, and conversation on many of the topics covered in these two previous posts.

tags: augmented reality, virtual reality, podcasts, audio
categories: augmentedreality
Thursday 01.28.16
Posted by CHIEF
 

HoloLens: MICROSOFT is LIKE THAT WEIRD UNCLE THAT ALWAYS HAD COOL SHIT →


Today I got a chance to check out Microsoft’s HoloLens augmented reality headset at a developer demo a few floors above their flagship NYC store. I received an invite to try the thing out because I have a Microsoft developer account. I have a Microsoft developer account because I’ve done some creative work with the Kinect.

I posted another piece on Medium recently about the problems with AR and have since been giving the topic a lot of thought. I’ve got a few more AR posts in the queue.

Just a bit more preface: this post is as much about that developer demo event as it is about the hardware and the content Microsoft chose to present. I’ll justify as we go along.

There appears to be a segment of the Microsoft hardware dev team that has wizard-like powers. The Kinect 2 is superlatively excellent at what it does, and it does so at a price point (~$150 + a $50 Windows adapter) that betrays its loss-leader position in their Xbox ecosystem and has made it an indispensable tool in the world of interactive art.

As you’ll read below, the HoloLens is a more-than-adequate seminal entry into the world of high-fidelity (in contrast to the low-fi of Google Glass) augmented reality. Some smart decisions were made with the interface elements in the demos presented, and the hardware, as mentioned above, must be the creation of some kind of warlock, or shaman, or kahuna, or witch doctor, truly. Some strong medicine.

With all that said, it seems that Microsoft just can’t get out of their own way. The experience of attending this demo was cripplingly awkward, and I say this as someone who has just come back from CES, arguably the best place to get a sweaty handshake and inconsistent eye contact in the Northern hemisphere. That stuff is going to be a part of this post too, not just because it’s entertaining, but because it sheds some light on MS’s marketing angle with the thing. That’s gonna matter when AR meets the public.

AR has some dorky optics to begin with. You look like an asshole using it—an issue that must be overcome if AR is going to make it in the wild—and if Microsoft is going to lead the second charge to bring it to the public (and they could, with their reach and this amazing untethered device), I’m afraid all of AR will be seen through their dweeby lens.

There are two puns in that paragraph.

Read More

tags: augmented reality, ar, wearables, hololens, microsoft
categories: augmentedreality
Thursday 01.14.16
Posted by CHIEF
 

©2016 HARD WORK PARTY