
PLI awards seed grant funding to 14 cross-departmental research teams

Nassau Hall
Jean Shin / The Daily Princetonian

In its inaugural round of seed grant funding, the Princeton Language and Intelligence Initiative (PLI) awarded $798,000 to 14 different research projects that utilize artificial intelligence (AI) and large language models (LLMs). 

In September 2023, PLI launched in response to what director Sanjeev Arora called “revolutionary” progress in AI in the previous five years. To be used by Aug. 31, 2025, the seed grant funding will support the early stages of AI-based research projects across the humanities, social sciences, natural sciences, and engineering. 


“The seed grants were always part of the conception [of PLI],” Arora said in an interview with The Daily Princetonian. “We should be supporting research going on in other disciplines, and people are very hungry to use large AI models in their research.”

LLMs are models that have been trained with a massive amount of textual data in order to perform natural language processing (NLP) tasks such as text recognition, translation, and text prediction.
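The text-prediction task mentioned above can be illustrated with a deliberately tiny sketch. The following is not a real LLM (which learns from billions of documents with neural networks) but a toy bigram model that makes the underlying idea concrete: count which word tends to follow which, then predict the most frequent successor. The corpus and function names here are illustrative inventions, not anything from PLI's projects.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: a bigram model that
# counts, for each word, which words have followed it in a tiny
# made-up corpus. Real LLMs learn vastly richer statistics, but the
# task -- scoring likely next tokens -- is the same in spirit.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice)
```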

PLI’s executive team evaluated 27 proposals from 20 different departments. Applicants submitted three slides describing their project’s application of large AI models. 

“The grant process was really easy, and I very much appreciate that, because in my field, I work on other grants for four or five months that are for far less [money],” Andrea DiGiorgio, a lecturer in the Princeton Writing Program, told the ‘Prince.’ 

Her seed grant project, “Impacts of Social Media on Wildlife Conservation,” builds on three of her previous research projects to investigate how social media posts advertising conservationist causes with photos of wildlife “can inadvertently lead to negative outcomes with the public.”

“When people see these images, they want to then study orangutans and they want to then have an orangutan as a pet,” she explained. “We’re trying to find ways to still post and capture public attention, but not do so in a harsh way that inadvertently creates or exacerbates these problems for animals.” 


DiGiorgio started her project in 2016, when she individually analyzed and coded Facebook posts after spending a year researching orangutan populations in Indonesia on the island of Borneo. 

“Primatologists are not always the most tech-savvy, and at the time there just wasn’t as much that AI could do,” she said. DiGiorgio and her co-writer at the University of Miami later used MonkeyLearn to process 10,000 posts a month for free, but could not find funds for the technology to process larger amounts of data. 

“This grant will let us get back into bigger AI now,” she said. Moving forward, DiGiorgio intends to analyze data from more social media and tourism platforms, consider a greater number of animal species, hire undergraduate and graduate students to assist the project, and present their findings at the American Zoological Association Conference next year. 

Likewise, professor of linguistics and computer science Christiane Fellbaum GS ’80 and postdoctoral researcher Happy Buzaaba’s project “Infrastructure for African Languages” has brought together an international team to create “treebanks,” bodies of data that annotate a language’s syntactic and morphological information.


“The whole idea of the project is to increase the representation of African languages in natural language processing research,” Buzaaba said. 

Fellbaum and Buzaaba hired three native speakers to annotate approximately 1,500 sentences for each of the 11 African languages in the project. Researchers have developed treebanks for well-studied languages, but this project has the potential to “test” the suitability of treebanks for understudied languages. 

“And it’s this universal scheme that is supposed to fit all languages. So the question is, will it fit these languages? And if so, linguists are going to be very happy,” Fellbaum said in an interview with the 'Prince.'

However, any “adjustment” to the treebank schema “should be seen positively, because it will tell us more about the richness of human language,” she added. 
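To make the annotation work concrete: the “universal scheme” Fellbaum describes is consistent with the Universal Dependencies framework, whose treebanks store one word per line in the CoNLL-U format, recording each word's part of speech, morphological features, and which word it syntactically depends on. The fragment below is an illustrative English example in that format, not data from the project itself.

```
# text = Birds sing.
1	Birds	bird	NOUN	_	Number=Plur	2	nsubj	_	_
2	sing	sing	VERB	_	_	0	root	_	_
3	.	.	PUNCT	_	_	2	punct	_	_
```

Here the column after the lemma and part of speech carries morphological features (e.g., plural number), and the numeric column points to the head word: “Birds” depends on “sing” as its subject, while “sing” is the root of the sentence. Annotators produce lines like these for each sentence, which is the kind of work the project's native speakers are doing for the 11 African languages.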

For their project “Toward Foundation Models in Time Series,” computer science professor H. Vincent Poor, postdoctoral research associate Hao Huang, and Yuqi Nie GS intend to “establish the first open-source comprehensive foundation model specifically tailored for a broad spectrum of time series applications,” according to their proposal.

“Most [important],” Huang said, “is trying to see whether we could make this great artificial intelligence more applicable in our daily life to contribute to a more sustainable world in different areas and domains,” including transportation, energy systems, weather predictions, and finance research. 

Huang and Nie told the ‘Prince’ that the PLI grant will offer them a platform to be acknowledged by external funding agencies as the project develops. 

“This is fully revolutionary,” Arora said of PLI’s research and outreach on campus. “I’m very thrilled that the University leadership recognized [AI technologies] and reacted quickly and is moving nimbly towards studying, leveraging, and expanding the use of AI [on] campus.”

Elisabeth Stewart is an assistant News editor for the ‘Prince.’

Please send corrections to corrections[at]dailyprincetonian.com.