Is higher ed ready for smart video?

Artificial intelligence is the key to next-generation video capture

There is nothing very “smart” about my video collection. I have the expected classics that any film minor would have acquired, dusted and ready to stream at a moment’s notice.

If my library had a card catalog, nine out of 10 drawers would be dedicated to titles that start with Star or end in Man, and the analytics would be reduced to “Which universe, Marvel or DC?”

By contrast, the demand from universities for smarter video systems is vast. The massive growth of on-campus video is fueling the desire for innovations that leverage artificial intelligence to get more out of the fastest-growing data type on campus, whether measured by file size or network traffic.

AI is being built into most phases of academic video technology, shaping the future of capture, curation, accessibility and retrieval.

Smart tracking

One of the earliest concerns among academic institutions rolling out video capture systems was keeping the instructor in the frame without CNN-level spending on camera operators.

First and second generations of automatic tracking camera systems have depended on the professor wearing—or standing on—specific instruments that cue the camera, while other products use a combination of cameras to accomplish the task.

Dan Freeman, president and CTO of VDO360, has incorporated AI to reduce the cost and increase the accuracy of automatic tracking in his latest classroom camera systems. Other manufacturers have AI-based refinements on the way.

“We have been able to greatly augment the intelligence of the onboard software thanks to the availability of smaller, cheaper and more powerful processors in the pivoting camera mounts themselves,” says Freeman.

AI-based systems such as Freeman’s marshal independent algorithms for movement, as well as shape and face detection against a single camera feed, promising lower cost with increased accuracy and flexibility.
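
To make that approach concrete, here is a minimal sketch, not VDO360’s actual software, of how a single-feed tracker might combine face detection with motion detection to steer a pan/tilt mount. It assumes OpenCV with its bundled Haar face cascade; the pan_toward() command is a hypothetical stand-in for a real mount’s control API.

```python
# Sketch of a single-feed speaker tracker: prefer a detected face,
# fall back to the largest moving region, then nudge the mount.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
motion = cv2.createBackgroundSubtractorMOG2()

def locate_speaker(frame):
    """Return the x-center of the most likely speaker region, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:                        # prefer the largest detected face
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        return x + w // 2
    mask = motion.apply(frame)                # otherwise track the largest moving blob
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        return x + w // 2
    return None

def pan_toward(x_center, frame_width, deadband=0.15):
    """Hypothetical mount command: pan only when the subject leaves the center band."""
    offset = (x_center - frame_width / 2) / frame_width
    if abs(offset) > deadband:
        print(f"pan {'right' if offset > 0 else 'left'} by {abs(offset):.2f}")

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    center = locate_speaker(frame)
    if center is not None:
        pan_toward(center, frame.shape[1])
```

The face detector anchors the shot while the instructor faces the room, and the motion fallback keeps the camera following when the face is turned away or blocked by the whiteboard.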

Filling the gap

AI is also contributing to progress in accessibility. The last decade has seen growth in the use of text-captioning services for video in an effort to comply with accessibility laws.  However, many institutions fail to meet the requirement to provide audio descriptions for all visuals.

AI may be the best way to fill the compliance gap that applies to every window that is streamed.

In one of the most famous examples of AI applied to video, IBM’s cloud video group used the Watson “supercomputer” to generate a rally-by-rally summary description from all video recorded at the US Open tennis tournament. The stunning result is a polished visual and text summary of every match played.

The same algorithms used to identify, quantify and contextualize the lobs, aces, dives and fist pumps into succinct match descriptions translate directly to higher education video applications.

These algorithms will soon be applied to the far easier challenge of breaking down the gist of charts, animations, whiteboards and PowerPoint slides for the visually impaired.  
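
As a rough illustration of where that could head, the sketch below, which does not describe any vendor’s product, uses a publicly available image-captioning model from the transformers library to draft a first-pass description of a single exported slide image. The slide.png path is a placeholder.

```python
# Draft an audio-description caption for one lecture slide with an
# off-the-shelf image-captioning model (BLIP, via the transformers pipeline).
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

result = captioner("slide.png")          # a frame exported from the lecture capture
draft = result[0]["generated_text"]
print(f"Audio description draft: {draft}")
```

A human reviewer would still need to verify and polish each draft, but automating the first pass is what narrows the compliance gap at scale.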

Fast forward

Smarter automatic indexing of institutional video is still the top priority of educators like CIO Joseph Collins of North Hennepin Community College. “Faculty and students want to retrieve the exact section of the video from lectures, simulation systems and faculty creation that applies to their current need.”

Collins envisions a near future where AI is harnessed to automatically match video assets at the thematic level for easy retrieval by busy students. “Key information is all online already—it needs to be automatically integrated as much as possible to the videos for our students to access it when they need it.”
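
One plausible way to deliver that kind of retrieval, sketched below under the assumption that timestamped transcripts already exist, is to embed transcript segments and a student’s query with a sentence-embedding model and jump to the closest match. The model name and the sample segments are illustrative, not details of Collins’ system.

```python
# Match a student's question to the most relevant lecture segment
# using sentence embeddings over timestamped transcript chunks.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# (start_time_seconds, transcript_text) for each chunk of a captured lecture
segments = [
    (0,    "Course overview and grading policy."),
    (480,  "Deriving the quadratic formula by completing the square."),
    (1260, "Worked example: projectile motion with air resistance."),
]

query = "how do I complete the square"
query_vec = model.encode(query, convert_to_tensor=True)
seg_vecs = model.encode([text for _, text in segments], convert_to_tensor=True)

best = int(util.cos_sim(query_vec, seg_vecs).argmax())
start, text = segments[best]
print(f"Jump to {start // 60}:{start % 60:02d} — {text}")
```

Because the matching is thematic rather than keyword-based, a query phrased in the student’s own words can still land on the right minute of the right lecture.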

Which, for the record, is exactly how AI manages video for Bruce Wayne, Tony Stark and T’Challa.  


Sean Brown, with years of experience in academic video production, is a consultant with Minneapolis-based Contegy Digital.
