Angling to stay relevant in the exploding AI field, Meta is launching a new organization, the Open Innovation AI Research Community, to foster what it describes as “transparency, innovation and collaboration” among AI researchers.
Initially, the focus of the group will be the privacy, safety and security of large language models, such as the models underpinning OpenAI’s ChatGPT; giving input into the refinement of AI models; and setting an agenda for future research. Meta says that it expects its own researchers to participate in the organization, but that the Open Innovation AI Research Community will be “member-led,” with Meta’s AI R&D group, Meta AI, serving only as a “facilitator.”
“The group will become a community of practice championing large open-source foundation models where partners can collaborate and engage with each other, share learnings and raise questions on how to build responsible and safe foundation models,” Meta writes in a blog post. “They’ll also accelerate training of the next generation of researchers.”
Meta intends to sponsor a series of workshops focused on “critical open research questions” and “developing guidelines for responsible open source model development and release.” But details beyond that remain vague. Meta says the Open Innovation AI Research Community might eventually have a website, social channels for collaborating and research submissions to academic conferences, but it doesn’t commit to any of this.
Members of the Open Innovation AI Research Community are presumably on the hook for funding their own work. Meta didn’t indicate that it’ll set aside capital or compute for the group’s efforts (in fairness, probably to avoid the perception of undue influence). But that’s a tough sell off the bat, given the steep expenses associated with AI research.
The Open Innovation AI Research Community, frankly, comes across as performative from a company that has repeatedly flirted with controversy where AI is concerned.
Late last year, Meta was forced to pull an AI demo after the system generated racist and inaccurate scientific-sounding text. Reports have characterized Meta’s AI ethics team as largely toothless and the anti-AI-bias tools it’s released as “completely insufficient.” Meanwhile, academics have accused Meta of exacerbating socioeconomic inequalities through its ad-serving algorithms and of showing a bias against Black users in its automated moderation systems.
Will the Open Innovation AI Research Community change all this? It seems unlikely. Meta’s encouraging “professors at accredited universities” with “relevant experience with AI” to participate, but this writer wonders why they would, given the wellspring of open machine learning research communities unaffiliated with any Big Tech company.
Perhaps I’ll be proven wrong. Perhaps Meta’s Open Innovation AI Research Community will indeed deliver on its promise, creating “a set of positive dynamics to foster more robust and representative models,” as Meta writes. But I question Meta’s sincerity and level of commitment here, particularly considering how few resources it has pledged toward the effort from the outset.
The deadline to apply for the Open Innovation AI Research Community is September 10. Meta says that it welcomes applicants from “diverse research disciplines” with the “technical capabilities to pursue research,” and that more than one participant from the same university may apply.