NAB 2024 Review – Focus on Value Added, AI, Hybrid Cloud and Edge Computing

Maarten Verwaest
April 23, 2024

The key takeaways

  • Back to business – profitability and efficiency are the main drivers of change
  • AI is everywhere
  • Cloud technology increasingly the backbone
  • Cameras get more pixels while becoming smaller and smarter at the same time
  • Edge computing is gaining ground for better privacy and reduced turnaround times

Industry trends

Apart from the more technical themes like AI, cloud computing and workflow automation, NAB Show 2024 went back to the industry basics of optimising for operational excellence, usability, efficiency and profitability. Within that context, NAB 2024 was packed with a lot of great products, services and technology.

  • Profitability: IABM analysis shows that business confidence in MediaTech is down from 2023, driven by a pessimistic outlook and cost considerations. 2024 has started off with a significant number of cost reduction initiatives including large-scale staff cuts by Sky, Paramount Global and Channel 4, as they seek to manage costs and become profitable in the streaming space.
  • Consolidation: Strikes in Hollywood last year accelerated the industry’s move to hybrid business models. Increasing consolidation (in the form of M&A activity and joint initiatives) is helping media businesses to improve their hybrid offerings and target new markets with larger content catalogues. For example, last month Disney and Reliance agreed to merge their Indian operations, while Canal+ made a bid for Multichoice to expand its footprint from French-speaking Africa into South Africa. Closer to home, DPG Media made an offer to acquire RTL Netherlands.
  • Efficiency: Macroeconomic pressures have pushed tech budgets down in 2024, whereas investment in AI continues to grow. This reflects the industry’s focus on efficiency and the need to ‘do more with less’. Efficiency, ROI and revenue generation are increasingly important drivers of technology investment in 2024. Some companies are also outsourcing investment to cut costs and focus on their core business.
  • Integration: Application vendors seek to integrate with peers, allowing customers to cut out man-in-the-middle workflow vendors, thereby reducing processing time and improving overall reliability at the same time.
  • Accuracy and Usability of AI: Rather than cutting jobs, AI is now being positioned as a co-pilot (see a recent DPP podcast) to drive efficiency. The raw accuracy of AI is increasing, and overall usability is improving as AI point solutions become embedded at the core of real-life production workspaces.

AI has become a commodity, but there’s still a long way to go

AI was impossible to avoid; it was everywhere. It was almost comical, and it will be interesting to see how vendors can differentiate themselves through the actual applications of AI in their solutions (or otherwise). A healthy air of scepticism and pragmatism pervaded the discussions in Vegas, as visitors probed vendor claims and tempered the early hyperbole about GenAI turning the industry upside down. A lot of misunderstanding remains, as different types of challenges and solutions get mixed up.

It was also interesting to see that the real innovation is coming from startups and scale-ups, rather than from established vendors, who tend to apply emerging technologies rather than drive innovation themselves.

In the category of Analytical AI (enrichment, audio and speech recognition, image recognition), companies like Moments Lab and Limecraft fight inaccuracy and data overload through multi-modal indexing. Post-production incumbents including Avid, Adobe and Blackmagic Design have added speech-to-text for text-based editing.

In the category of Generative AI, Adobe Firefly is taking a leap ahead in automating VFX, which can be used to erase objects or to extend images and clips.

Furthermore, companies are increasingly pitching AI as a joint force (cf DPP AI Co-pilot), emphasising the very purpose of an assistant rather than a machine that makes jobs redundant. Here is the reasoning.

Schematic overview of the value added of an AI co-pilot. Rather than to make jobs redundant, AI should enhance the work of people, making sure they create more output and value added for the same effort and cost.

Rather than making creative jobs redundant, AI in the form of a co-pilot (or an ‘edit assistant’, as suggested by Sandy McIntyre of AP) ensures that creative staff are capable of processing more content at the same cost, thereby creating more output, better stories and greater value. Going forward, expect the training investment (both in time and in the amount of data required) to drop notably, meaning the benefit will kick in faster and AI will become accessible to smaller producers.

Machine Learning? That hype seems to have passed.

Cloud is now the backbone for media, and workflow vendors are reaping the benefits (for the time being)

BigTech cloud providers (Microsoft, Google, Amazon) have, by luck, fallen into the position of offering AI tools to supercharge media workflows. At NAB, the cloud was demonstrated as more of a fixture for live remote production and ideal for bread-and-butter post-production workflows. New breakthroughs in transporting, processing and manipulating data in data centres are still being unlocked.

One of the few areas of the post-production workflow yet to be convincingly replicated in the cloud is colour grading. In part, that’s due to concern that what one person sees in a calibrated suite might not be the same as what a client sees when reviewing the images somewhere else. Even this problem is being cracked, however. Panavision-owned post facility Light Iron demonstrated the end-to-end grade of an actual feature film, the indie project Penelope lensed by Nathan Miller, graded in Baselight from proxy files.

Whereas the key benefits are scalability and ease of integration (through APIs), cloud adoption is hampered by cost and security-related issues (cf. the Sony hack). Cost concerns will be mitigated over time by challengers in the field; security and privacy concerns, by buyers maturing in their expectations.

Interestingly, cloud-native tools are genuinely easy to integrate. Vendors take the initiative, mitigating the integration risk and cutting out man-in-the-middle workflow solutions (remember the time when middleware was hot?). So, between the traditional make-or-buy alternatives, a new third option is becoming available – ‘assemble’.

  • Buy: procure the full stack from a single vendor. Not very flexible and not the cheapest, but likely to work (e.g. Sony, Blackmagic Design).
  • Make: customise your application landscape around an established workflow solutions vendor (e.g. Embrace, SDVI, Carrick-Skills, Qibb).
  • Assemble: rather than building an application landscape around an established workflow solutions vendor, there are more options than ever to assemble a more directly integrated technology stack. This leads to a smaller operational footprint, better mean time between failures, and lower cost. The key enablers here are exposed, documented APIs, methods for single sign-on (e.g. using SAML), etc. Good examples are recent undertakings to integrate OOONA and Limecraft, or how Limecraft is seamlessly embedded in Avid MediaCentral.
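To make the ‘assemble’ pattern concrete, here is a minimal sketch of two best-of-breed tools chained directly through their documented APIs, with no workflow engine in between. The service names, methods and data shapes below are illustrative assumptions, not any real vendor’s API; in practice each call would be an authenticated REST request (e.g. with a SAML-derived token).

```python
# Hypothetical sketch of the 'assemble' integration pattern: direct
# vendor-to-vendor handoff via documented APIs, no middleware layer.

class TranscriptionService:
    """Stand-in for a cloud transcription vendor's REST client."""
    def transcribe(self, clip_id: str) -> dict:
        # In reality: POST the clip reference to the vendor's documented API
        return {"clip": clip_id, "text": "hello world", "lang": "en"}

class SubtitleService:
    """Stand-in for a subtitling vendor's REST client."""
    def create_subtitles(self, transcript: dict) -> str:
        # In reality: POST the transcript JSON straight to the peer vendor
        return f"1\n00:00:00,000 --> 00:00:02,000\n{transcript['text']}"

def assemble_pipeline(clip_id: str) -> str:
    """Chain the two services directly: smaller footprint, fewer hops."""
    transcript = TranscriptionService().transcribe(clip_id)
    return SubtitleService().create_subtitles(transcript)

print(assemble_pipeline("clip-001"))
```

The point of the sketch is the shape of the integration: each hop is a documented API call between the two tools themselves, so there is no intermediate workflow product to license, operate or debug.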

Cameras get more pixels, and become smaller and smarter at the same time

Blackmagic jumped into cameras a decade ago with its first Pocket Cinema Camera and has been upping the ante ever since. At that point in time, Blackmagic’s president Dan May told us that his boss, Grant Petty, had a goal of becoming the world’s leading and largest professional camera producer, and they’re more than on track. The company unveiled a dizzying array of new products and cinema and broadcast camera upgrades this year, ranging from the new low-cost PYXIS 6K up to the URSA Cine 17K.

Its new URSA Cine 12K is the updated flagship designed for high-end production. The specifications are nothing short of spectacular (a full-frame 36x24mm sensor, 16 stops of dynamic range, a full range of lens mounts, expanded memory and plenty of industry-standard connections), allowing unprecedented flexibility in post.

This flexibility however comes at the expense of massive bandwidth and storage consumption, producing up to 5Gbps of content. The camera therefore has 8TB of built-in storage, allowing you to record 2 hours of RAW footage at full resolution. In any case, the images produced by this camera are not suitable for uploading to the cloud, as the cost would be prohibitive. Hybrid storage architecture and Edge computing are imperative.
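A quick back-of-the-envelope check puts these figures in perspective. Assuming a sustained 5 Gbit/s rate and decimal units, 8 TB would last roughly 3.5 hours; the quoted 2-hour figure suggests peak RAW data rates are actually higher than 5 Gbps.

```python
# Sanity check of the quoted URSA Cine 12K figures (assumption: a
# sustained 5 Gbit/s data rate; real peak RAW rates are likely higher,
# which would explain the quoted 2-hour recording time).
DATA_RATE_GBPS = 5      # quoted "up to 5 Gbps"
STORAGE_TB = 8          # built-in storage

gb_per_hour = DATA_RATE_GBPS / 8 * 3600    # GB written per hour at 5 Gbps
hours = STORAGE_TB * 1000 / gb_per_hour    # recording time on 8 TB
print(f"{gb_per_hour:.0f} GB/hour -> {hours:.1f} hours on 8 TB")
```

Either way, at well over 2 TB per hour, the arithmetic makes the article’s point: routinely uploading this material to the cloud is economically off the table.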

Hero image of the Blackmagic URSA Cine 12K

At the other end of the spectrum, there is clearly a push to produce smaller devices that still create very decent images. Germany’s PROTON Camera Innovations launched the PROTON CAM, which the company bills as the world’s smallest broadcast-quality camera. Measuring just 28mm x 28mm and weighing only 24 grams, the PROTON CAM is tiny, but it also incorporates market-leading specifications compared to similar cameras. It uses 12-bit sensor technology and advanced FPGA processing to deliver unmatched resolution and dynamic range, capturing details with exceptional clarity. It also offers a wide-angle view of up to 120 degrees and better low-light performance, without image distortion, giving broadcasters significant flexibility and creative scope in how it is deployed.

Germany’s PROTON Camera Innovations launched the PROTON CAM, billed as the world’s smallest broadcast-quality camera.

We also noticed the Insta360 X4 action camera launched at the show – a versatile 360-degree action camera featuring 8K video and AI-powered gesture control. It appears to be a dramatic upgrade over its predecessor, offering higher video resolution, a larger touchscreen, longer battery life, and a more rugged build than the X3.

Hybrid storage and Edge computing

Given ever-increasing file sizes, it is clear that the cloud as such is not suitable for storing the entire stock of footage in full resolution. It is suitable for collaborative purposes using proxies, and it can be used to exchange a small selection of high-resolution shots, but expect the bulk of the storage to remain on premise.

This necessitates local storage and processing, referred to as ‘Edge computing’. The design pattern of combining different types of storage to balance cost and performance – for example local storage on data tape (such as Archiware), local storage on spinning disk for editing, and cloud-based storage for collaboration – is commonly referred to as ‘Hybrid Storage’. Sony (‘Creators’ Cloud’), Backlight and Limecraft are good examples of cloud-native Media Asset Management (MAM) systems that connect to local storage, enabling unlimited collaboration.
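The hybrid-storage pattern described above boils down to a placement rule per asset. As a sketch, it might look like the function below; the tier names and thresholds are illustrative assumptions, not any vendor’s actual policy.

```python
# Illustrative hybrid-storage tiering rule: proxies go to the cloud for
# collaboration, active full-resolution masters stay on local disk, and
# cold full-resolution material moves to local tape (cf. Archiware).
# Tier names and the 90-day threshold are assumptions for the sketch.

def pick_tier(size_gb: float, is_proxy: bool, days_since_access: int) -> str:
    if is_proxy:
        return "cloud"        # lightweight proxies for review and collaboration
    if days_since_access > 90:
        return "local-tape"   # archive cold full-res material on data tape
    return "local-disk"       # active full-res editing on spinning disk

print(pick_tier(0.2, True, 1))      # small proxy, freshly used
print(pick_tier(450.0, False, 2))   # full-res master in active use
print(pick_tier(450.0, False, 180)) # full-res master, untouched for months
```

In a real MAM the rule would also weigh bandwidth and egress cost, but the division of labour is the same: the cloud holds what collaboration needs, the edge holds the bulk.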

Good examples of Edge computing are the Jellyfish storage devices by OWC (Other World Computing), which use a licensed version of Kyno (developed by Lesspain Software, acquired by Signiant), and the considerable efforts by Blackmagic Design and Avid to reduce the time to edit for high-resolution material.

But it is Blackmagic again that took the most challenging leap ahead; the company’s DaVinci Resolve is where all the creative work is done. Since purchasing it back in 2009, the company has continually made it more useful and more powerful for everyone in the production arena. The latest version – Resolve 19 – includes the company’s new DaVinci Neural Engine AI tools and over 100 feature upgrades for the entire production team. Editors can work directly with transcribed audio to edit timeline clips, while colorists can quickly and easily produce rich, film-like tones. VFX artists now have access to an expanded set of USD (Universal Scene Description) and multi-poly rotoscoping tools.

Users have a tough time “outgrowing” Resolve 19 because there are now tools even for high-end digital film production and live broadcasting. In addition, it integrates seamlessly with Blackmagic Cloud for smooth, professional live and on-demand content distribution.