Friday, April 21, 2017


Way back in 2014, I wrote a couple of speculative articles on Roles in Final Cut Pro X.

In the first article entitled ROLES, I detailed the challenge of organization in the current FCP 10.1 timeline, and how I thought Roles could provide a solution to the lack of traditional Tracks.

In the second article ROLES REDUX I went more in depth with Roles as an organizational tool- how they function and what improvements could be made to make assigning Roles faster and easier.

SMASH CUT! to two years later.  In October of 2016 Apple released Final Cut Pro 10.3- which along with a host of features and improvements, also added this feature—

Needless to say I was surprised and delighted.  As the old saying goes, “Great minds think alike”; or in this case it would probably be more apt to say, “Lesser minds like mine occasionally stumble onto ideas that greater minds have already thought of”.  I’ll take what I can get!

Using 10.3 since its release, I'm struck by how solid and well thought out Apple's implementation is.  Apple has really fulfilled the promise of Roles on a lot of fronts.

Assigning Roles is easier across the board.  Roles can now be assigned at the Import stage, and Apple has even added the ability to directly access iXML metadata from Audio Recorders, a feature previously available only via 3rd party apps like Sync-N-Link X.  If you don't have the benefit of on-set Role metadata, you can now assign Roles to individual or multiple selected clips in the Inspector.  You also have access to assigning Subroles to individual Audio Components, which wasn't possible without jumping thru some serious hoops in earlier versions of the application, or via round-tripping thru the aforementioned Sync-N-Link X or Role-O-Matic by Charlie Austin.  Finally, Roles can be reassigned directly in the Project window, though there are some necessary restrictions on Audio Component reassignment for Sync, Compound, and Multicam Clips.  But overall this is a MASSIVE improvement in workflow.

And as for Magnetic Timeline 2?  What can I say?  All the power of Final Cut Pro's Magnetic Timeline, now with a customizable organization structure.  For a long time I fought with people who could not imagine the concept of audio organization without traditional Tracks.  But once again we see that Final Cut Pro's deeply embedded metadata foundations are the solution for long-held analogue concepts.  As I theorized back in 2014- if Smart Collections were the smarter, better answer to traditional bins, then Roles provide us with an infinitely superior and more fluid way to organize audio in the timeline, while at the same time retaining all the flexibility and "Picture First" ideology of the trackless timeline.  Let's take a look at how an edit from the show CANADA CREW (first edited in 10.2) looks when updated to 10.3.

Organization is not just possible now, but automatic and fluid.  Properly tagged audio instantly arranges itself by Role/Subrole in the Project timeline.  The new Audio Lanes can be arranged at will via the Timeline Index, allowing you to seamlessly re-arrange your audio landscape.   
Additionally, Role Focusing allows you to quickly minimize all but the selected Role, so you can concentrate on the audio elements that are important, while not losing track of the bigger picture.

Next, Apple has created a new Roles HUD, which allows you to create, delete, name, rename, and combine Roles.  And the great thing is, changes made here propagate Library-wide- to clip instances in both the Event Browser AND those already edited into Projects.

Finally, Roles now have customizable colours, making the elements of your audio soundscape clearly definable and easy to navigate.

And all these improvements come with a very welcome UI overhaul, which flattens and simplifies the interface, putting the emphasis on the content, and bringing Final Cut Pro in line with Apple’s current design aesthetic.

Whew!  Well, now that that’s done, I guess we can all sit back and be happy forever, right?

Ok, who am I kidding…?

When I wrote that first article, there were two aspects of Final Cut Pro that I thought Roles would be key to solving. First: organization.  Second: improved audio mixing.  Enter:


One of the bigger changes in Final Cut Pro 10.3 was a fundamental restructuring of how Container Clips handle audio.  Container Clips include Compound Clips, Sync Clips and Multicam Clips- basically any kind of clip which is made up of other audio and video clips.  If you Compound Clip a Project, instead of a simple stereo or surround mix-down, you now have access to Mixdowns of all the Roles in the Compound Clip- Apple calls these "Role Components".  Further, an option in the Inspector allows you to drill down even further to view individual Subroles.  This gives you the ability to add audio FX and level adjustments to elements at the Subrole, Role, AND Master Mix level.  This is really powerful and reveals just how much work has gone on behind the scenes from an audio standpoint.

Role Components now visible when Project is nested in a Compound Clip.

Apple's own white paper on this, Understanding Audio Roles in Final Cut Pro X, goes much more in depth on this subject and I encourage people to read it (perhaps several times, like I have).

From a practical standpoint, these enhancements make mixing in FCP X more flexible and powerful than ever before.

THE CHALLENGE (and why I don’t think this is the end of the road for Roles)

Though the 10.3 improvements make mixing in Final Cut Pro X far more possible than before- I've been thinking about what enhancements could be made to take Roles even further.

The key question is whether Mixing should be considered an online process, one that is only done after picture is locked, as it is when you export your sound to an audio engineer. 

The current solution of Compound Clipping to access Role and Subrole Components provides a lot of power and depth- with the new audio chain expanding the functionality of Roles as buses, and Compound Clips as mixes and sub-mixes.  However, a series of consecutively nested Compound Clips can abstract the Project you're mixing in from the one you're editing in, making any potential editorial changes difficult.

While changes to a Project immediately filter down to its Compound Clips, any key-framed effects or audio level/pan adjustments added to the Compound Clips become disconnected from the timing of the edit.  So if 1 second is added half way thru your Project, any keyframes in the Compound Clipped Project after that timing adjustment are now out of alignment by 1 second.

In-Line Project Role Components

But what if mixing could be an organic part of the editorial process, allowing you to build a mix as the edit is evolving without fear of extra work if (when) the edit changes?

With the addition of the Audio Lanes view of the Magnetic Timeline 2, Apple has shown a willingness to allow for alternate or advanced display modes for the Project view.  So perhaps an additional function for "Display Role Components" can integrate the advanced mixing functionality currently available via Compound Clips directly into a single, combined Project view.

In this proposed UI scenario, the display of Role Components (I've called these "Mixes" in the diagrams below) is integrated into the existing Project view, rather than requiring the Project to be nested within a Compound Clip.  This posits that a Project is inherently a Container Clip, and that the current user interface simply lacks a way to view its Role Components.

For the purposes of demonstration, I've created a simplified Project view which allows us to see the concepts I'm suggesting more easily.

Below is a sample Project containing Video, Dialogue, Effects, and Music.  The Dialogue Role contains two Subroles [Dialogue-1 and Dialogue-2].

Much like the new FCP 10.3 Audio Lanes view, a proposed Role Component view could be accessible via a new button added to the bottom of the Roles pane in the Timeline Index, which would allow you to toggle the Role Components view ON/OFF.

Project Role Components Show/Hide toggle

When selected, a Role Component (or Mix) for each Role appears, and the original audio clips for each Role are minimized.  I think it's important to see audio elements for context, so you never lose sight of what's going on editorially.  Additionally, a grey Mixed Role Component is added beneath the Primary Storyline, representing the sum "Master Mix" of all Roles in the Project.

Role Component view with collapsed clips beneath for context, and “Master Mix”.

However, if you still need access to the full height audio clips for editing, you can expand the original clips by toggling the “Focus” button for a given Role in the Timeline Index.

Role Component view audio for Effects-1 Subrole expanded.

If you need to apply an effect to an entire Subrole, say the "Robot Voice Effect" to Dialogue-2 for example, you could activate Subrole Components in addition to Role Components, just like the current check-box option in the Inspector for Compound Clips.  To maintain audio clip context, the individual clips could appear as faded elements in the Role Component container.  You could still access and edit these clips, but if you drag an effect to the Subrole Container it will apply the effect to all the clips within the Subrole Mix.

Subrole Components enabled for Dialogue-2.

Activating individual Subrole Components could be done via a new button beside its listing in the Roles Index, which also shows that Role Components are active across the Project.   Here I've pillaged the existing "Mixdown" icon that appeared with 

Individual Subrole Component View [Dialogue-2 active]

During editing, you may want a mixed view- here's another visual of our Project with original clips for Music and Dialogue-1 expanded, collapsed for Effects-1, and Dialogue-2 in a Subrole Component.

Mixed Role Component View with original Subrole clips for Dialogue-1 and Music-1
expanded,and Subrole Component for Dialogue-2 activated.

Regardless of whether Role/Subrole Components or their original clips are minimized, expanded, or hidden- Project audio would always play the result of the Master Mix.  The exception being if you play a Clip, Role, Subrole, or individual Role Component via Clip Skimming.


In scenarios where you might want to add volume adjustments or effects to a subset of components across multiple Roles, a Mixed Role Component is used to represent these "Sub-Mixes".  A sub-mix could be an effect applied across an entire Project, or trimmed like a Compound Clip to be only a section of a Project; a scene or sequence.

In the following example, our two characters go into a cave.  DIALOGUE and EFFECTS Roles need to be placed into a Mixed Role Component (or sub-mix) with an Echo effect added, while the non-diegetic music is unaffected.  Keyframes added to the Mixed Role Component would indicate the amount of influence the audio effects of the sub-mix have on the assigned Role Components- fading the Echo effect up and down as the characters enter and leave the cave.
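As an aside, the keyframed "influence" described above is essentially a wet/dry crossfade between the unprocessed audio and the effected version.  Here's a minimal sketch of that idea- all names and numbers are hypothetical, and none of this reflects Final Cut Pro's actual internals:

```python
# Hypothetical wet/dry blend: a sub-mix influence keyframe of 0.0 passes the
# dry audio through untouched; 1.0 is fully processed (e.g. full Echo).

def blend(dry, wet, amount):
    """Crossfade dry/processed samples by a 0.0-1.0 influence value."""
    return [d * (1.0 - amount) + w * amount for d, w in zip(dry, wet)]

dry_dialogue = [0.2, 0.4, 0.4, 0.2]
echoed       = [0.1, 0.3, 0.5, 0.4]   # the same dialogue through the Echo effect

outside_cave = blend(dry_dialogue, echoed, amount=0.0)  # keyframe at 0: dry only
inside_cave  = blend(dry_dialogue, echoed, amount=1.0)  # keyframe at 1: full echo
```

Animating `amount` between those keyframes is what would fade the Echo up as the characters walk into the cave, and back down as they leave.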

Selecting this sub-mix in the Project viewer shows which Roles or Subroles it is influencing, in much the same way that selecting a video clip highlights all its attached audio components.

Mixed Role Component sub-mix adding Echo effect to Dialogue and Effects Roles.

Right-clicking on the sub-mix in the Project window allows you to add Roles to, or remove them from, that sub-mix.  Selecting a Role assumes you want the Sub-Mix to affect all of its Subroles, or you could select/deselect Subroles individually.

Modal Dialogue to assign Roles to Sub-Mix.

Once a Sub-Mix is created, it is added to the list of Roles in the Timeline Index, where it could be named for easy identification, and where you can see the list of Roles/Subroles that it's affecting.

Role Index showing Sub-Mix Mixed Role Component,
expanded to show affected Roles.


In a more integrated single-window mixing scenario, better audio keyframe selection and adjustment across multiple Role Components would be beneficial.

In the current Compound Clip scenario, you can only select and move audio keyframes on a single Role Component at a time.

If the timing of an edit changes in a way which affects the timing of audio keyframes across an entire Project (for example, the extension of a clip by 1 second half way thru an edit), then the ability to range-select keyframes across multiple Role Components would allow you to globally shift those keyframes to correct for any timing adjustments made after Role Components have been created- either using the mouse, or by entering a numeric adjustment value.

Project view showing Keyframes selected across multiple Role Components.

Alternatively, if Final Cut Pro were aware of the downstream relationship between a Project and its Role Components, then it's possible that the program could make automatic timing adjustments to any Role Component audio keyframes after a given editorial adjustment.

In the example below, 2 seconds is added to a clip at 15 seconds in the Primary Storyline.  As a result, all Role Component keyframes after 15 seconds are shifted +2 seconds automatically. 

2 seconds added to Primary Storyline clip results in any keyframes for
Dialogue and Music Role Components after the edit point being
automatically adjusted by +2 seconds
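That automatic ripple behaviour boils down to a simple rule: any keyframe that falls after the edit point gets shifted by the amount of the edit.  A few lines sketch the idea- this is a hypothetical model for illustration, not Apple's actual implementation or data format:

```python
# Hypothetical sketch: ripple-shifting Role Component keyframes after an edit.
# Keyframes are modeled as (time_in_seconds, value) pairs.

def ripple_keyframes(keyframes, edit_time, delta):
    """Shift every keyframe at or after edit_time by delta seconds."""
    return [
        (t + delta, value) if t >= edit_time else (t, value)
        for (t, value) in keyframes
    ]

# A Dialogue Role Component's level keyframes:
dialogue = [(5.0, 0.0), (14.0, -6.0), (20.0, 0.0), (28.0, -12.0)]

# 2 seconds added to a clip at the 15-second mark:
adjusted = ripple_keyframes(dialogue, edit_time=15.0, delta=2.0)
# Keyframes before 15 s are untouched; the later two move to 22.0 and 30.0.
```

The same function applied across every Role Component in the Project would keep an entire mix aligned through an editorial change.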


If all this seems like more complication in the Project window... well it kinda is.

Added functionality will naturally breed added complexity.  But like with Audio Lanes in Final Cut Pro 10.3, that complexity is hidden from users who don't require it.  Advanced view-modes allow the Project view to scale with the user, from beginners with simple audio needs and a small number of elements, all the way up to complex mixes for final delivery.

This proposed integration of Role Components directly into the Project UI potentially allows for a single-window interface that makes mixing more integrated and fluid, while maintaining the benefit of Final Cut Pro X's trackless "video first" timeline.

In creating the visuals for this blog post, and thinking thru some of the concepts, I ended up making a short video that may give the text a bit more context.

Friday, April 29, 2016

Apple Motion: 3D for 2D

Late in 2015 I had the opportunity to produce 2 spots for Children’s Wish Foundation in Canada. The process behind the visuals was a bit unique, so I thought I’d detail it here for those interested.  

Motion is a lot more versatile an application than people give it credit for.  I'd wager it can do most of what a lot of editors are using After Effects for.  Honestly I think the biggest thing holding it back is some key 3rd party support by the same plug-in makers that bring a lot of the muscle to the table in Ae.  I'm looking at you, Trapcode and VideoCopilot!

The concepts behind the spots had been scripted and storyboarded during the Summer of 2015, with an eye to a handcrafted aesthetic.  The mandate was for simple colours and textures, and we planned to animate the characters using pretty standard Forward Kinematic puppet techniques.  Early on, we'd considered building backgrounds out of actual construction paper, but we realized we didn't have time for real-world construction. We also considered backgrounds built out of texture elements in Photoshop, but I was looking to add some extra pizazz to the design beyond a simple texture collage.  Whatever route we took, by the time we were given the go-ahead we only had 3 weeks to produce both 30-second spots from character design to final delivery, so we had to be very schedule-conscious.

Character Designs by Ben Mazzotta

With approved character designs as our touchstone, I started investigating adding 3D geometry and shading over flat textures.  As an Apple Motion user, the answer was obvious: mObject, the 3D text & object tool from one of the best 3rd party developers for Final Cut Pro X and Motion- MotionVFX.

mObject interface

Since it’s release in 2014, mObject has been updated several times with new features, UI enhancements, and overall performance improvements.  And while the 3D Text feature added to Final Cut Pro X 10.2 and Motion 5.2 last April has potentially "sherlocked" one aspect of mObject’s feature set- it’s ability to import and work with full, textured 3D models in either FCP X or Motion continues to make it a compelling purchase.

mObject can import .obj files with high polygon detail and high resolution texture maps.  But for my purposes I wouldn't be needing either of those things!  My plan was to build sets out of 3D objects, and then overlay the shadow information onto simple textures; bringing something extra to the backgrounds that at the same time wouldn't conflict with the approved character design.

One of the most important additions to mObject since its release has been Ambient Occlusion [AO].  This feature simulates the bounce lighting that occurs in the real world, and I remember distinctly when this became a "thing" in 3D animation working with Lightwave3D in the early 2000s.  It was an immense render pig that created some very beautiful results even with simple models.  And now here we are with simulated Ambient Occlusion in Motion via mObject.  What a world!

I thought Ambient Occlusion would bring a nice feeling of real-world "miniature sets" to the backgrounds.

Ambient Occlusion off (L) and on (R)

For the purposes of this tutorial I’ll break down the first shot in the second of the two spots.  There’s a lot going on so it makes for a good case-study.

Sample storyboard.

Based on my storyboards, I constructed the basics of the room out of Primitives that come built into mObject.  This gave me a quick and simple way to roughly block out the room for the camera, without getting bogged down in a lot of the details.  Because the timeline was so short, it was important for me to be able to supply the animator with blocking so he could get drawing ASAP. I actually ended up using much of this basic architecture in the final scene, as these primitives are light-weight... and really, a wall is a wall!

Basic Geometry of hospital room created with Primitives in mObject

Next came finding scene specific furniture and decorations.  TurboSquid was my go-to source for all the models in the spot.  It’s not a free service, but if you’re getting paid for something like this, then a couple of bucks isn’t much to pay for good models considering the time you're saving.  I think the most I paid for any individual model was $30, with a total expenditure across the 2 spots of about $150.

Again, my goal here was finding high quality but simple models.  It almost would have been easier if I'd wanted to do something MORE detailed as far as objects were concerned, since that's what it's assumed most people are looking for.  "Simple" models can often mean low-polygon, and that can be seen in the final renders.  But in the end, TurboSquid's selection was broad enough that I was able to find something for everything I needed.

One thing to note for anyone who's planning on using downloadable models in mObject: even if the models are in the correct .obj format- it can be real voodoo whether textures import properly, depending on what program the model was created in, and how it was converted.  Knowing the basics of a good 3D modelling program like the open-source Blender can be a real help here.  I didn't need the textures, but it can be essential for cleaning up models and deleting elements you don't need, since there's no ability to alter model geometry within mObject itself.  After import into mObject, all the models were resurfaced with a plain white texture, with no reflectivity or specularity, since all I was really looking for was form, and how it casts and receives shadows.

Here’s the scene with all of the final models in place in mObject.

Room with final models.

And here they are in the final shot.

Basic scene setup from Motion camera view.

At this point I provided final reference images to the animator, who could then move ahead while I worked on finalizing the scene.  The characters were created in Photoshop, and animated in After Effects, since that's the program he was most comfortable with.  Some animation for both spots was done in Motion- and technically there's no reason why it all couldn't have been.  Just timing.

Lighting mObject scenes can be done with standard lighting setups created within mObject, or using Motion's own lights.  It's important to note here that for these lights to work, they must be within the same Group as the mObject generator.  In this particular scene, I was simulating a time-lapse shot, so I needed to animate some key lights and overall ambience as the shot cycles from day to night and back.

Lighting setup with Ambient Occlusion turned on.

This leads us to the most complicated part of this project- the 2D texturing.  I'd built up a library of paper, wood, fabric, and subtle metal textures, and mocked up the final texture layout using my temp stills in Photoshop.  This way we could be sure that the background textures and colour palette wouldn't clash with the character designs.

Originally, I had hoped to apply the textures in mObject using a front projection map which, as the name implies, projects the textures from behind the camera onto the objects, so that the textures DON'T wrap around the models the way they normally would.  Unfortunately, projection maps aren't amongst mObject's various surface wrap modes.  Drat!

This meant I had to create individual hold-out mattes for each of the objects and textures in the scene.  Basically, this meant turning on and off the different objects, or specific surfaces on a given object, and using that as the Image Mask for the raw texture layers.  In a perfect world I could have actually used multiple mObject generators for the Image Mask (and this does work), but having duplicate mObject generators in your Project can cause real slowdowns… and crashing.  Oh, so much crashing….

So, once I was happy with the setup of the scene and any camera moves, I rendered still PNGs or ProRes 4444 movies for each texture element.  This took a while for complex scenes with lots of textures… but hey!  Art!

If there was camera movement in the shot, it was important to match the Z position of the texture planes to the objects in 3D space, so that textures wouldn't "slide" relative to the holdouts.  Below is a sample of the textures with the Image Masks applied.  You'll note I ended up adding perspective to the floor texture; it looked too weird when the texture was completely flat.

Flat textures with Image Mask holdouts from mObject.

After the Image Mask renders, the final pass from mObject was the lighting pass with camera movement and all models turned on, so that you can get all the interactive shadows and Ambient Occlusion interactions in the scene.

This image or movie would then be reimported and overlaid over the texture layers using the Multiply transfer mode, thereby applying the 3D shading to all the flat 2D texture layers.  This was the toughest shot in the second spot, so in the interest of time the push in was achieved with a simple scale shift in Final Cut.  If we'd had time, an actual 3D camera move like you see in other shots would have been my preference.

Shadow pass overlaid onto flat texture layers.
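For anyone curious what the Multiply transfer mode is actually doing here, it's simple per-pixel arithmetic: with values normalized to 0.0-1.0, each texture pixel is multiplied by the corresponding shading pixel.  A minimal sketch (illustrative values only; real compositors do this per channel across the whole frame):

```python
# Multiply transfer mode: white (1.0) in the shading pass leaves the texture
# untouched, while darker values (shadows, Ambient Occlusion) darken it.

def multiply_blend(base, blend):
    """Per-pixel multiply of two rows of normalized (0.0-1.0) values."""
    return [b * s for b, s in zip(base, blend)]

texture_row = [0.9, 0.8, 0.85, 0.7]    # flat paper texture
shading_row = [1.0, 1.0, 0.5, 0.25]    # lighting/AO pass: white = unshaded

shaded = multiply_blend(texture_row, shading_row)
# Unshaded areas pass the texture through; shadowed areas darken proportionally.
```

This is why rendering the lighting pass against white works so well: multiplying by 1.0 is a no-op, so only the shadow and AO information ends up "printed" onto the flat textures.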

Characters were then added in, and if necessary some of the texture hold outs were used as Image Masks to put characters “between” layers of the background, for example in this scene behind the bed or out in the hallway.

Final Composite with Characters.

In this first shot, simulating a time-lapse, there's actually no "animation" per se, but lots and lots of still drawings cycled at a regular interval.  A high speed cloud or stars layer was added outside the window, and elements like the blinds, bed cover, and TV were animated to add some more movement to the room. The fast moving shadows on the floor and wall help to sell the effect too.  Finally, a bit of grading with Color Finale to show the shift to night, and there you have it.

We had fooled around with different effects to try and enhance the time-lapse effect, but the characters just ended up looking smeary and out of line with the overall aesthetic.  In the end, turning on Frame Blend in Final Cut Pro X's retime window with a very minor speed change gave us just a hint of a fade as the characters appeared, disappeared and moved around in the shot.

Each spot ended up being about 10 days of total production.  Here's the second one, Izaak Gets a Gaming System, in its entirety.

I use mObject a lot for corporate work, putting clients' websites on 3D models of phones, tablets and computers.  For that alone it's made its money back for me.  This was a fun exercise in blending 3D and 2D in an interesting way, and I honestly had no idea what the results would look like when I started.  Surprises are great... as long as the client likes them too!

Tuesday, October 14, 2014


My most popular article was the one I wrote last year on ROLES.  I’ve received lots of great feedback on it and a lot of great questions as well.

So I decided to take that feedback and roll it into two followup pieces looking at the two places I think Roles needs to expand to meet their full potential- and prove once and for all that a “trackless” timeline needn't give up functionality for the fluidity it provides.

In this first article, I’ll look at what Roles are, how they’re applied, and different ways the process could be improved for efficiency.

For the purposes of illustration, I’ll be using screen grabs from NOW YOU KNOW, a 39x7min Kids TV series that I edited on X this year.  Thanks to Little Engine Productions for permission.


Roles are accessed via the Modify>Assign Roles menu.  By default, there are two video Roles [Video and Titles], and three Audio Roles [Dialogue, Music, and Effects].

Right away FCP X does something clever- it automatically assigns Roles to all imported media, and it’s actually right a surprising amount of the time.  Any audio that comes in attached to video is assumed to be Dialogue, which for most shoots it often would be.  Pure audio tracks are tagged as either Dialogue, Music or Effects.  In my own experience over the past 3 years using X, it’s usually correct about 3/4 of the time.  I’m not sure how Final Cut is making this call.  Channel Orientation?  Length?  Some hidden metadata that I’m unaware of?  Regardless, it’s great that we’re not just given a blank slate and have to tag everything from scratch.  And in many short form projects, these three Audio Roles may be all you need to get the job done.

If you want to see what Roles Final Cut has assigned for an individual clip, you can find it in the Inspector window's Info tab.

You can also see Roles for all clips in an existing Project by changing the Timeline appearance settings to “Show Roles”.

And you can see all the Roles currently used in a Project via the Timeline Index.  Here you can highlight specific Roles.

But the default Roles are just the beginning.  Modify>Edit Roles allows you to create not only new Video and Audio Roles, but Sub-Roles for existing categories.  As I mentioned, Dialogue, Music and Effects can probably get you by in a simple project; but in longer or more complex timelines more granularity will be needed.  So besides other obvious categories like Ambience and Voice Over, Sub-Roles for Dialogue by character or different categories of Sound Effects can help to create cleaner separation of elements that will have your Audio Engineer professing his love for you (or maybe just swearing at you less for once).

Here is the list of Roles I created for the first Season of NOW YOU KNOW.


Currently, applying Roles to a clip can be done in a couple of ways.

First is via the Event Browser.  You can change the Role assignment of a clip either from the Modify Menu, the Info tab of the Inspector, or via Command keys- CTRL+OPT+M [Music], E [Effects], or D [Dialogue].  If you have multiple clips selected in the Event Browser, you can apply a Role change to all the selected clips at once, like you can with most attribute changes.  It's always better to do this before you start editing- assigning a Role in the Event Browser once means you don't have to apply it to each individual instance of the clip after you've edited it into your Timeline.  The problem is that currently you can only apply ONE ROLE to a clip in the Event Browser, meaning if your camera has multichannel audio or you've created Sync Clips from second-source sound, and you want to assign multiple Roles or Sub-Roles to individual Audio Components, there's no way to do it in the Event Browser.

You CAN get around this limitation by taking each clip and using the "Open in Timeline" function, which opens all parts of a clip into its own mini-Timeline.  From here you can expand your Audio Components just like in a Project, and assign audio Roles to each channel [thanks to Ben Balser for this tip], but doing this operation for each clip in even a medium-sized project, let alone a large documentary or feature film with thousands of clips, would be insane.

"Open in Timeline" of a clip, with a single Audio Component selected.

But if you have lots of Audio Components and sub-Roles in your Project, you may be better off to just edit untagged clips into your Timeline and assign them later.  I say this because once you have your Project laid out with all the Audio Components expanded, you can multi-select lots of Audio Components across lots of clips that share the same Role or sub-Role and tag them en masse.  But unless you know the layout of the Audio Components by heart, or have taken the time to name your Audio Components before editing (which I do suggest), this can take a bit of diligent work.

Timeline with multiple components selected to add common Role: "Baboo"

The goal of Roles is to be able to attach and pass on Audio metadata in a way that isn't slaved to the static nature of Tracks.  But in its current incarnation, assigning Roles has several stumbling blocks.

First, the inability to effectively apply Roles to individual Audio Components either in the Ingest or Event Browser stages.

Second, Command keys cannot be assigned to custom Roles or Sub-Roles- meaning you have to apply them each time with the mouse, via Modify>Assign Roles or the Info tab's Roles drop-down menu.

Third, like all metadata, changes made to the Role of a clip in the Event Browser do not "filter down" to clips already edited into a Project.  Neither is there an option for changes you make to a clip in a Project to filter to any other instances of that clip in the Project, or back up to its Master Clip in the Library.

Finally, the lack of a visual representation of a Project that organizes Roles for easy identification.  This last one is so important that it will be the subject of the second part of this article.


Let’s take a step back from all the holes in the current process to look at a ray of sunshine in just how great this can all work. It should come as no surprise that if something good is going on with Metadata, it probably has something to do with Philip and Greg at Intelligent Assistance.

People know Sync-N-Link X is an amazing tool for syncing dailies and automatically creating either FCP X Sync Clips or Multicam Clips from on-set timecode.  But the extra awesome part you may not know is that if your Audio Recordist has used equipment that utilizes iXML- then the names the Recordist gives those channels on set will be ported over to Final Cut and assigned as BOTH your Audio Component names AND Role assignments.  At present, FCP X does not itself support iXML, so this feature is exclusive to Intelligent Assistance's software.

Sync-N-Link X Roles Interface
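For the curious, iXML is just an XML chunk embedded in a Broadcast WAV file, with track names listed per channel.  Here's a rough sketch of pulling those channel names out of an iXML document- the element names (TRACK_LIST/TRACK/CHANNEL_INDEX/NAME) follow the iXML spec, but locating the chunk inside the WAV's RIFF container is omitted for brevity, and none of this reflects Sync-N-Link X's actual code:

```python
# Hedged sketch: extracting channel names from an iXML document so they could
# be mapped onto Audio Component names / Role assignments.
import xml.etree.ElementTree as ET

SAMPLE_IXML = """<?xml version="1.0"?>
<BWFXML>
  <TRACK_LIST>
    <TRACK_COUNT>2</TRACK_COUNT>
    <TRACK><CHANNEL_INDEX>1</CHANNEL_INDEX><NAME>Boom</NAME></TRACK>
    <TRACK><CHANNEL_INDEX>2</CHANNEL_INDEX><NAME>Lav-Jim</NAME></TRACK>
  </TRACK_LIST>
</BWFXML>"""

def channel_names(ixml_text):
    """Return {channel_index: name} from an iXML TRACK_LIST."""
    root = ET.fromstring(ixml_text)
    return {
        int(track.findtext("CHANNEL_INDEX")): track.findtext("NAME")
        for track in root.iter("TRACK")
    }

names = channel_names(SAMPLE_IXML)
# → {1: 'Boom', 2: 'Lav-Jim'}
```

With a mapping like that in hand, "Boom" and "Lav-Jim" become both the Audio Component names and the Role/Sub-Role assignments- which is exactly the prep work that arrives for free when the Recordist names channels on set.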

This means that without a lick of work, the editor has Role tagging for all Dialogue from a shoot done before they even start work.  Between this and Final Cut’s automatic tagging for Music and  Effects elements, the Editor has much of his Audio prep already done.

That’s just awesome and something I’m very much hoping to take advantage of on NOW YOU KNOW’s Second Season.


But for those of us not lucky enough to get on-set Roles assignments, there needs to be a more efficient way to assign Roles to large volume of clips BEFORE you edit them into your sequence.  Let’s look at a couple of ways this could be done.

First, we need the ability to assign more complex Roles to multiple clips at once at the logging stage.  Initially I wondered if this could be done at the Import Window, but I think much like Keywording, Batch Renaming, Syncing, and most other pre-editorial tasks, this would be better accomplished in the Event Browser.  The key improvements need to be:

  1. The ability to apply Roles to individual Audio Components
  2. The ability to apply multiple Roles to several clips with the same Audio Component configuration.
  3. The ability to apply custom Roles faster and more efficiently.

Here are a couple of ways this could be instituted.  None of these are brand new ideas, but they leverage how we already apply other kinds of metadata within FCP X.

If you know that clips have the same Audio Component configuration, you can change the name of individual Audio Components for multiple selected clips in the Audio tab of the inspector.  It would be great if we could assign audio Roles in the same way.

Mockup of selecting and assigning Role to a single Audio Component

Aiding in this would be a "Keyword"-style Roles HUD, allowing you to assign Roles and Sub-Roles to command-keys for quick assignment to selected items, either in the Event Browser or Project Timeline.

HUD for Assigning Custom Role Shortcuts

Another way to do this would be to assign Roles to a single clip, and then have the ability to copy/paste those Role assignments to other clips.  Right now Paste Attributes is a function exclusive to clips in a Project, so that would be new functionality in the Event Browser.

Mockup of Paste Attributes including Roles in Event Browser.

With 10.1, Final Cut Pro went from a dual Event/Project structure to the new combined Library model.  We’ve already seen this combined database pay off in new features like Used/Unused Event media indicators, and Roles would definitely benefit from increased communication between Project and Events.

This would mean that changes made to Roles in a master clip in the Event Browser could automatically filter down to all instances of that clip in edits already in progress.  Or conversely, being able to make changes to clips in a Project, and upstream those property changes back to the media in the Event Browser, so that any further material used from that clip will be properly tagged.

I could also see a big advantage to having a unified Roles interface, which could allow an editor to change already assigned Role names globally in a Library, say if a character name is incorrect and you need to change DIALOGUE JIM to DIALOGUE FRED, or if part way thru an edit Audio Post requests a specific naming structure.  Little things like this will make working with Roles easier for everyone.

Assigning Roles may seem like a lot of work, but like with many things in FCP X, up-front work pays off in spades over the course of the edit- especially if it goes on for months as in TV episodic, documentary or feature work.


In the second article on Roles, I'll revisit Roles-based Project organization, looking at how a complicated edit, with all different kinds of audio, would be improved by better visual separation.