As technology advances, machine learning (ML) continues to make waves across industries, not least visual effects, animation and content creation, which each benefit from the time-saving qualities it promises.

It's Foundry's firm belief that ML is about accelerating artists: helping them achieve results they couldn't before, and getting to final creative faster by removing drudge work. On this line of thought, ML works best when artist and algorithm work in harmony rather than in contention.

That's why, back in 2019, we set out on project SmartROTO accompanied by leading global visual effects company DNEG and the University of Bath (UoB). The project aimed to speed up the rotoscoping process, traditionally laborious and time-consuming, via artist-assisted machine learning.

The idea was that artists would create a set of shapes and a small set of keyframes, and SmartROTO, specifically the ML tech behind it, would speed up the process of setting intermediate keyframes across the sequences.

The project was not without its challenges, particularly as modern machine learning has shifted the focus from a machine-centric approach to a data-centric one, so the success and performance of the tool is almost entirely dependent on the quality, diversity and size of its data set. Deploying ML tools in VFX pipelines, data sharing and artist participation were just a few of the challenges the project navigated, and ultimately hoped to solve.

Two years later, as SmartROTO wraps, what have we learned, where did these challenges leave the project, and what's in store for the future of ML and rotoscoping? We caught up with Ben Kent, Foundry's Research Engineering Manager, plus key members of DNEG and the University of Bath, to explore all this and more.

Having worked on the project since its launch in 2019, Ben is a key member of Foundry's A.I.R (AI Research) team and is perfectly placed to provide insight into SmartROTO's progress and evolution over the past 24 months.

"It's been a great learning experience," he tells us. "The principal learning is that rotoscoping is very hard and that people are going to be involved, certainly for the foreseeable future. There are a lot of considerations in the rotoscoping process that need to be taken into account before starting out on a project like SmartROTO."

"For example, artists don't necessarily put all the vertices and edges of shapes along corresponding image features and edges; they may just decide to put the middle of the shape elsewhere, resulting in various edge cases that become harder to track. This is compounded by other difficult cases such as rotoscoping motion-blurred objects, and shapes being occluded during parts of their lifetime."

"Ultimately, this culminated in the realization that if you want to keep in line with artists, you need to work around how they work. As a result we learnt a lot more about how artists really work."

Speaking of how this impacted the progress of the project, Ben continues: "The overall approach we've ended up taking is pretty similar to what we planned to do at the beginning. It's still a machine learning-based tracking and shape consistency model; we're still imagining the user sets up their initial shapes and a few initial keyframes. With SmartROTO, we're then using this model to predict in-between keyframes, better than interpolated or tracked keyframes, to try and reduce the amount of time they spend having to finesse in the middle."

"Our opinion is that if we could save even 25% of an artist's time, that would be really valuable, because rotoscoping is such a ubiquitous task in visual effects."

Both DNEG and UoB had critical roles to play in achieving this aim, and in the overall development of SmartROTO.

As a major VFX studio with a large, dedicated roto team, DNEG was uniquely positioned to provide a truly huge data set of real production roto artwork for the learning algorithm. The data set included over 650,000 artist-animated shapes consisting of 125 million user keyframes.

When it came to rotoscoping as an artistic process itself, the studio brought a deep wealth of experience and insight to understand how SmartROTO could provide a superior workflow and better quality of life to the specialist rotoscope artist.

Overseeing DNEG's involvement was Ted Waine, R&D Supervisor. "By playing a lead role in specifying the feature set and UI/UX of the product, it's been rewarding to help direct the project as well as delivering that all-important data set and the APIs to access it," he tells us.

Asked what impact a project like SmartROTO could make on the industry, Ted is quick to lend his thoughts.
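To make the workflow concrete: the baseline SmartROTO aims to improve on is plain interpolation of sparse artist keyframes. The sketch below is purely illustrative (it is not Foundry's or SmartROTO's actual implementation); it assumes a roto shape is a list of (x, y) control vertices keyed at a few frames, with in-between frames filled by linear interpolation.

```python
# Illustrative sketch only, not SmartROTO's real model: linearly interpolating
# roto shape keyframes, the baseline an ML in-betweener would try to beat.

def lerp_shape(shape_a, shape_b, t):
    """Blend two shapes with matching vertex counts at parameter t in [0, 1]."""
    return [
        (ax + t * (bx - ax), ay + t * (by - ay))
        for (ax, ay), (bx, by) in zip(shape_a, shape_b)
    ]

def interpolate_keyframes(keys, frame):
    """Return the shape at `frame`, given sparse keys as {frame: shape}."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    # Find the pair of keyframes surrounding the requested frame.
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return lerp_shape(keys[f0], keys[f1], t)

# Two keyframes for a two-vertex shape; frame 5 lands halfway between them.
keys = {0: [(0.0, 0.0), (10.0, 0.0)], 10: [(10.0, 10.0), (20.0, 10.0)]}
print(interpolate_keyframes(keys, 5))  # → [(5.0, 5.0), (15.0, 5.0)]
```

Linear in-betweens like these drift off the underlying object whenever motion is non-linear, which is exactly the "finessing in the middle" the project set out to reduce.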