
HITS Spring: Sefleuria’s CEO Explores How the M&E Industry Can Build a Framework for Ethical AI

Artificial Intelligence (AI) has come a long way since its early introduction into mainstream information technology (IT).

As usage and use cases grow, it is critical for organizations to infuse ethics into the decision-making behind any AI integration, according to Jessica Graves, founder and CEO of Sefleuria, a data science consulting firm for luxury companies.

During the session “Building a Framework for Ethical AI” at the Hollywood Innovation and Transformation Summit (HITS) on May 19, Graves discussed the frameworks that drive AI and how they integrate with the algorithms behind its increased usage.

She described Sefleuria as a “computational and spiritual technology company” that looks at “machine intelligence a little bit different,” telling attendees: “We do believe that there are aspects of what we learn from spirituality and other systems that we could potentially start experimenting with to see what it means to combine these two types of intelligence.”

Sefleuria has “worked both on an individual level with people who are building companies and we also work quite extensively with corporate and startups who are trying to build alternative futures,” she said.

She has also worked for Microsoft’s machine learning practice, she pointed out, noting she’s “seen machine learning from a couple of different angles” and specializes in “helping commercialize machine learning research into applications.”

In the industry, there is often a “kind of missing bit of commercialization that’s happening or there’s technology that has no specific thing that it needs to do or people are so application-focused that they actually miss out on all the things that machines can do,” she said.

There are also “a few kind of myths, especially around when you start to move into ethical AI that you have to think about where there’s a genuine belief that if only the data was perfect, this machine learning algorithm I launch would also be perfect and if we just tweak the data a little bit then we should be able to get better results,” she pointed out.

That sometimes “can be very true,” Graves said, noting she has invested in companies “where that’s almost exactly what they’re doing is looking for exactly the right types of data to train another algorithm how to perform better.”

But, she warned: “At the same time, it’s really important to understand that the types of algorithms that we deploy can look [like] perfect, amazing, balanced, wonderful data [and] still produce biased outputs and results…. In the very general sense of the word bias, you can have great data and, by the nature of how some kind of model was built, it can still give you extremely skewed outputs.”
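To make that point concrete, here is a minimal sketch (not from Graves’s talk) of how a perfectly balanced dataset can still produce skewed outcomes once a modeling choice interacts with groups whose signals differ; the two groups, the noise levels and the shared decision threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # samples per group

# "Balanced" data: both groups are equally represented and have
# identical 50/50 positive rates, so the labels carry no bias.
y_a = rng.integers(0, 2, n)
y_b = rng.integers(0, 2, n)

# The measured score is simply noisier for group B than for group A.
score_a = y_a + rng.normal(0, 0.5, n)
score_b = y_b + rng.normal(0, 1.5, n)

# A single shared threshold is a modeling choice, not a data flaw.
threshold = 0.5
pred_a = score_a > threshold
pred_b = score_b > threshold

# Despite balanced data, error rates diverge sharply between groups.
print("False positive rate, group A:", np.mean(pred_a[y_a == 0]))
print("False positive rate, group B:", np.mean(pred_b[y_b == 0]))
```

Running this prints a false positive rate of roughly 16 percent for group A and roughly 37 percent for group B: the same “great data,” but skewed outputs driven entirely by how the model was built.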

At this time, she went on to say, “we’re at such a place of sophistication … you could take the same data set and do so many other things with it now.” But it’s important to keep in mind “who is this going to affect [and] how are they going to be included or excluded by the decisions that are made behind the scenes?” she noted.

What is also important to keep in mind, she said, is that people are going to be impacted by the decisions made and “we are stewards of their data.”

She added: “The challenge is to not turn away from any of the care” taken by organizations when using AI. “As much care as we give to talent … as much care as we give to editing, as much care as we give to all the other things that make especially the entertainment industry work so well for people and allow it to shift culture for people, that much care is what we should be putting into figuring out all of these trade-offs that are being made.”

With that in mind, she said, some of the key questions that we should be asking are: “Whose data is this? Where does it come from? What person is behind this? Can I protect that person? Can I serve that person? And who is being harmed?”

Meanwhile, “as we start to make these trade-offs,” it is important to “not shy away from how insanely complicated it is and also both not oversimplify how easy machine learning is and also not make it inaccessible. Everyone should have a voice here,” she said.

Concluding, she said: “My challenge for the industry is to really think about how are we feeding people to perceive both machine learning and AI in general and how they feel about that, how they think about it, what they believe is possible with it and then, second, how does what we do influence what people’s express desires are and how does that feed back into what we’re seeing and measuring?”


The Hollywood Innovation and Transformation Summit event was produced by MESA in association with the Hollywood IT Society (HITS), Media & Entertainment Data Center Alliance (MEDCA), presented by ICVR and sponsored by Genpact, MicroStrategy, Whip Media, Convergent Risks, Perforce, Richey May Technology Solutions, Signiant, Softtek, Bluescape, Databricks, KeyCode Media, Metal Toad, Shift, Zendesk, EIDR, Fortinet, Arch Platform Technologies and Amazon Studios.