Release Notes, Features

21.4 Product Update

Charlotte Coppejans
December 24, 2021

As 2021 draws to a close, it's time to take a step back and review the most important recent changes to the Limecraft Platform. Some are prominently visible, some more subtle. All are included in our standard offering, made available for your convenience as part of our continued commitment to give video professionals the best possible workspace.

Noteworthy efficiency improvements include the use of custom dictionaries, reducing the Word Error Rate of automatic transcription by 50%, and more intelligent Natural Language Processing (NLP), allowing better automatic spotting of subtitles. The search engine has been redesigned and improved, allowing you to instantly retrieve the right fragments regardless of the size of your database.

We’ve included support for synchronisation of audio and video using Linear or Longitudinal Timecode (LTC). We’ve also taken further steps to unlock the power of the underlying workflow engine using no-code/low-code workflows. Apart from these, we’ve shipped dozens of small improvements, including a cleaner layout of the library, more accessible action menus, improved column handling, and custom fields of the timecode type.

Using the Language Locale and Custom Dictionaries

Obviously, when using automatic speech recognition, the more words that are not properly recognised, the more time you need to post-edit the transcript. Besides this, we discovered that the number one source of frustration for journalists and language professionals is the repeated misspelling or misrecognition of words.

A word may not be recognised at all because the Automatic Speech Recognition (ASR) engine has not been trained on it, or it may be substituted by a word with a different meaning because it sounds similar. Or simply because words, and especially proper names, can be spelled differently depending on their meaning or context. For example, you may want to enforce a particular language flavour (or flavor, for the sake of the argument), depending on your target audience.

First of all, before you start using custom dictionaries, you should consider setting the language locale. It can be used to enforce consistent use of, for example, American or British English.

Using the locale to improve language consistency

When you are sure the language settings are correct and you want to further improve transcription efficiency by reducing the Word Error Rate (WER), we advise you to use ‘Custom Dictionaries’. Custom dictionaries are lists of words that are preferentially picked up by the ASR engine. If you do so, we recommend keeping those lists as small as possible to avoid false positives. For example, if you are working in sports, you may want to use a separate dictionary for each sport. Using custom dictionaries, you can reduce the WER by half, thereby reducing the time needed to post-edit the transcript even further.
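To make that figure concrete: the WER is the word-level edit distance between the automatic transcript and a corrected reference, divided by the number of words in the reference. A minimal sketch in plain Python (an illustration of the metric itself, not part of the Limecraft platform) looks like this:

# Minimal Word Error Rate (WER) sketch: word-level edit distance divided by
# the number of words in the reference transcript. Illustration only.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

print(wer("the fixture kicks off at noon", "the fixtures kick off at noon"))
# -> 0.33 (2 substitutions / 6 reference words)

Halving the WER roughly halves the number of corrections a post-editor has to make, which is where the time saving comes from.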

Custom dictionaries can be created and managed in the transcription settings (so you need a sufficient level of authorisation), and they can be selected when you start a transcription job (see screenshot above).
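To give an idea of what such a dictionary looks like in practice, think of it as a small, domain-specific word list handed to the transcription job. The sketch below is a hypothetical illustration; the dictionary contents and the start_transcription helper are made up for the example and are not the actual Limecraft API:

# Hypothetical illustration of per-domain custom dictionaries: small word
# lists the ASR engine should prefer. Names and contents are examples only.

CUSTOM_DICTIONARIES = {
    "cycling": ["peloton", "Wout van Aert", "Paris-Roubaix", "domestique"],
    "tennis":  ["tiebreak", "Roland-Garros", "deuce", "bagel"],
}

def start_transcription(asset_id, language, dictionary=None):
    words = CUSTOM_DICTIONARIES.get(dictionary, [])
    print(f"Transcribing {asset_id} ({language}), boosting {len(words)} preferred words")

# Pick the smallest dictionary that matches the content to avoid false positives
start_transcription("match-2021-12-19", language="en-GB", dictionary="cycling")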

More info about the use of Custom Dictionaries on the knowledge base.

Improved Search Experience – Less Clicking, More Creating

As the size of your libraries grows, so does the need for more efficient search. In the latest release, we upgraded the search engine so that you can refine your search query intuitively, step by step, without having to construct complex queries.

This allows users to identify and spot the right clips and subclips without having to watch entire pieces of content, drastically improving the efficiency of archive producers and journalists.
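Conceptually, this step-by-step refinement amounts to stacking simple filters on top of the previous result set instead of writing one big query up front. A rough sketch of that idea (the field names are hypothetical, not the actual Limecraft search schema):

# Conceptual sketch of step-by-step search refinement: each step narrows the
# previous result set. Field names below are hypothetical examples.

clips = [
    {"title": "Match highlights", "language": "en", "speaker": "commentator", "duration": 95},
    {"title": "Post-match interview", "language": "en", "speaker": "coach", "duration": 240},
    {"title": "Persconferentie", "language": "nl", "speaker": "coach", "duration": 600},
]

results = clips                                            # start broad
results = [c for c in results if c["language"] == "en"]    # refine: language
results = [c for c in results if c["speaker"] == "coach"]  # refine: speaker
results = [c for c in results if c["duration"] < 300]      # refine: duration

print([c["title"] for c in results])  # ['Post-match interview']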


Smarter AI delivers better Subtitles

Over the last couple of months, working closely with a number of expert users, we’ve improved various aspects of automatic speech recognition and the Natural Language Processing (NLP) used to create broadcast-grade subtitles. The results are convincing; just have a look at the clip below.

In the clip, we show the unpolished results of AI-generated subtitles. In the A/B test, the middle section was the state of the art 12 months ago; the bottom section is what we currently deliver. It is a challenging fragment (courtesy of the BBC), with background noise, a soundtrack, cross-talk, etc. Nevertheless, the speaker segmentation, the word error rate (WER) and the punctuation are very good, and the styling and timing of the subtitles are excellent.

https://youtu.be/_bYL2pVLTjk
AI-generated subtitles, showing the current state of the art (below) versus 12 months ago (middle)

As always, we still advise engaging a professional subtitler for polishing and fine-tuning the results, but as you can clearly see, the result of automatic transcription and spotting of subtitles comes quite close to the desired result. For professionally produced audio, most of our users report a post-editing time between 1:2 and 1:4 (2 to 4 minutes per minute of content), which is a massive time saver compared to a conventional process based on manual work.

If you would like to try it out, feel free to get in touch or give it a try straight away.

LTC-based Audio Synchronisation

The sound of linear timecode (‘LTC’) is a jarring and distinctive noise, which is often used as sound-effects shorthand to imply telemetry or computers.

https://soundcloud.com/gankuma/ltc30df005945?utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing

Yet LTC serves a purpose. Because several types of cameras don’t support accurate timecode, manual synchronisation of audio and video can take a lot of time in post-production. By recording an LTC signal both on one of the tracks of the audio recorder and on the audio track of the camera, audio and video can be synchronised automatically and robustly during ingest.

Using the latest release of Limecraft Edge, you can extract the LTC timecode from the audio and video while backing up; it can then be used to automatically synchronise the media before archiving them or transferring them to the edit suite.
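For those curious how this works under the hood: once the same LTC signal is present on both recordings, the offset between them is simply the difference between the first timecode decoded from each file. A simplified sketch, assuming a 25 fps production (generic Python, not the actual Limecraft Edge implementation):

# Simplified sketch of LTC-based synchronisation: both recordings carry the
# same LTC signal on an audio track, so their relative offset is the
# difference between the first timecode decoded from each file.
# Illustration only, not the actual Limecraft Edge implementation.

FPS = 25  # assuming a 25 fps (PAL) production

def timecode_to_frames(tc, fps=FPS):
    """Convert 'HH:MM:SS:FF' to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def sync_offset_seconds(camera_ltc, recorder_ltc, fps=FPS):
    """Seconds between the start of the camera clip and the start of the audio recording."""
    return (timecode_to_frames(recorder_ltc, fps) - timecode_to_frames(camera_ltc, fps)) / fps

# First LTC timecode decoded from each file (hypothetical values)
offset = sync_offset_seconds(camera_ltc="10:21:03:12", recorder_ltc="10:21:05:00")
print(f"Audio recording starts {offset:.2f} s into the camera clip")  # 1.52 s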

More information on using Edge for audio synchronisation on our knowledge base.

No-code and Low-code Workflows

As a last point, we want to emphasise our commitment to enabling no-code/low-code workflows. Power users should be able to set up a new workflow in minutes, regardless of the details of the specific use case, which may include ingest, transfer to post-production, transcription and subtitling, or publishing to VOD platforms. Hence, in an ideal world, we would like you to be able to configure a new workflow with a few easy clicks.

However, there is a delicate balance between functionality and flexibility on the one hand, and complexity on the other. In other words, the more complex the workflow – think frame rate conversions, audio channel mapping, watermarks, etc. – the more difficult it becomes to hide the underlying technical complexity. At Limecraft, we will always strive for an optimal balance between superior functionality and a decent level of usability: no-code if possible, low-code when necessary.

In a series of product updates over the coming months, we will disclose more functionality step by step. Using Limecraft, there is an amazing spectrum of functionalities and tuning parameters at your disposal. Going forward, we will make these configurable using the production settings, accessible through the API, and eventually visible in the user interface.

Ingest Templates

As a first example, let’s look at the ingest process. Using Limecraft Edge, you can now create one or more ingest templates. This gives authorised and skilled users an unprecedented level of control and, at the same time, access to a very rich set of processing parameters. More importantly, it largely eliminates the risk of errors caused by incorrect ingest settings.
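To give an idea of what an ingest template captures, think of it as a named bundle of processing parameters that an operator selects as a whole instead of filling in each setting by hand. The fields and names below are hypothetical examples, not the exact parameters exposed by Limecraft Edge:

# Hypothetical illustration of an ingest template: a named bundle of
# processing parameters selected as a whole, so operators cannot mistype
# individual settings. Field names are examples only.

INGEST_TEMPLATES = {
    "documentary-uhd": {
        "target_frame_rate": 25,          # frame rate conversion target
        "audio_channel_mapping": "stereo-downmix",
        "proxy_codec": "h264",
        "proxy_resolution": "1280x720",
        "burn_in_watermark": True,
        "extract_ltc": True,              # enable LTC-based sync on ingest
    },
}

def start_ingest(files, template_name):
    settings = INGEST_TEMPLATES[template_name]
    print(f"Ingesting {len(files)} file(s) with settings: {settings}")

start_ingest(["A001_C003.mov", "A001_C004.mov"], "documentary-uhd")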

If you would like to know more, here is the full article on our knowledge base.

Ingest templates allow you to hide the technical complexities of file format conversions and encoding parameters

Limecraft Tools

For those who have particular requirements and want to go beyond the standard options made available by Limecraft Edge and Flow, or who want to integrate third-party microservices, we also offer a low-code alternative in the form of a command line interface (‘Limecraft Tools’).

Limecraft Tools can be used to launch manipulation and ingest processes directly, or to behave like a watch folder. Limecraft Tools is also extensively documented on the knowledge base.
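To illustrate the watch folder behaviour mentioned above: the tool keeps an eye on a directory and kicks off an ingest for every new file it sees. A generic sketch of that pattern (a plain Python polling loop, not the Limecraft Tools command line itself):

# Generic watch-folder pattern: poll a directory and trigger an ingest for
# every file that has not been seen before. Plain Python illustration,
# not the actual Limecraft Tools command line interface.

import time
from pathlib import Path

WATCH_DIR = Path("/mnt/incoming")   # hypothetical drop folder
POLL_INTERVAL = 10                  # seconds between scans

def trigger_ingest(media_file):
    # In a real setup this is where the ingest job would be launched.
    print(f"Starting ingest for {media_file.name}")

def watch():
    seen = set()
    while True:
        for media_file in WATCH_DIR.glob("*.mov"):
            if media_file not in seen:
                seen.add(media_file)
                trigger_ingest(media_file)
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    watch()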