Exploring Generative AI for Fun and Profit
Generative AI has serious issues. It feeds on creative work used without permission, harbors biases, and consumes enormous amounts of energy. Despite these flaws, it is a powerful way to prototype new tools.
I saw this firsthand at Sundai Club, a monthly generative AI hackathon held near MIT. The group let me join a session focused on building tools for journalists. Sundai Club is supported by Æthos, a nonprofit that promotes the socially responsible use of AI.
The club includes MIT and Harvard students, professional developers, and a member of the military. Each event starts with brainstorming, after which the group selects one project to build.
Journalism pitches included using multimodal models to analyze political posts on TikTok, generating freedom of information requests, and summarizing videos of court hearings for local news outlets.
The group decided to build a tool to help reporters covering AI find noteworthy papers on the arXiv preprint server. My input likely influenced this choice, since scouring arXiv for interesting research is a big part of my job.
Once the goal was set, the coders used the OpenAI API to create word embeddings of AI papers on arXiv. This let them search the data for relevant terms and explore the relationships between research topics.
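The embedding step works roughly like this: each abstract becomes a numeric vector, and similar papers end up with similar vectors. The sketch below is a minimal illustration of that idea, not the club's actual code. Real usage would fetch vectors from the OpenAI embeddings endpoint; here, random vectors stand in for API output so the example is self-contained, and the paper titles are invented.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings: in the real tool, each vector would come from
# the OpenAI API, one per arXiv abstract (titles here are made up).
rng = np.random.default_rng(0)
papers = {
    "Agentic LLM workflows": rng.normal(size=256),
    "Sparse attention kernels": rng.normal(size=256),
    "Multi-agent planning benchmarks": rng.normal(size=256),
}

# Rank papers by similarity to a query embedding. Using one paper's
# own vector as the query, that paper should rank first.
query = papers["Agentic LLM workflows"]
ranked = sorted(papers, key=lambda t: -cosine_similarity(query, papers[t]))
print(ranked[0])  # → "Agentic LLM workflows"
```

With real embeddings, the same ranking step is how a reporter's search term surfaces the most closely related papers.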
They also used Reddit threads and Google News searches to visualize research papers, Reddit discussions, and related news.
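To plot papers, Reddit discussions, and news articles on a single map, the high-dimensional embedding vectors have to be squashed down to two dimensions. One common way to do that is principal component analysis; here is a hedged sketch using PCA via NumPy's SVD, with random vectors standing in for real embeddings (the actual tool may use a different projection method):

```python
import numpy as np

def project_2d(embeddings: np.ndarray) -> np.ndarray:
    """Project an n x d embedding matrix down to n x 2 via PCA."""
    centered = embeddings - embeddings.mean(axis=0)
    # Left singular vectors scaled by singular values give the
    # principal-component coordinates of each row.
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, :2] * s[:2]

rng = np.random.default_rng(1)
# Mixed corpus: rows could be papers, Reddit threads, or news items.
points = project_2d(rng.normal(size=(10, 64)))
print(points.shape)  # → (10, 2)
```

Each row of `points` becomes an (x, y) position on the map, so items with similar embeddings land near each other regardless of source.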
The prototype, called AI News Hound, is basic, but it shows how large language models can help journalists mine information. Here's a screenshot of the results for the term "AI agents." The green squares that sit near the clusters of news articles and Reddit threads represent research papers a reporter might want to cite in an article about AI agents.