Anthony Ha, a prominent figure in technology journalism, currently serves as the weekend editor for TechCrunch, a leading technology news website. Over his career he has held staff or contributing roles at nine different outlets, from The Washington Post to The Atlantic, a record that reflects his adaptability and dedication to the craft.
Ha began his journalism career as a local government reporter at the Hollister Free Lance, where he honed his craft in both narrative writing and investigative journalism. That formative experience prepared him for a move into the tech industry as a reporter at Adweek, where his coverage of consumer technology trends and innovations established him as a trustworthy, authoritative voice.
After his stint at Adweek, Ha moved to VentureBeat as a senior editor. There he assembled a team of talented journalists and deepened the publication’s reputation for thoughtful, engaging analysis of the rapidly evolving tech landscape. Leading that team sharpened his editorial judgment and broadened his view of the industry.
Beyond his editorial positions, Ha has served as vice president of content for a venture capital fund, a role that gave him firsthand insight into the intersection of technology, investment, and entrepreneurship. Those experiences continue to inform his reporting and analysis.
Now based in New York City, Ha continues to shape tech journalism through his editorial work at TechCrunch. He is highly regarded for taking on the tech sector’s thorny issues, including the risks posed by artificial intelligence.
Discussion surrounding AI has intensified, particularly around hallucinations, which OpenAI defines as “plausible but false statements generated by language models.” The company observes that such inaccuracies “remain a fundamental challenge for all large language models,” and its researchers warn that if “the main scoreboards keep rewarding lucky guesses, models will keep learning to guess,” a recipe for misinformation.
Given these issues, the researchers are calling for a rethink of how these models are evaluated. They argue that “when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say ‘I don’t know’.” Grading on accuracy alone, in other words, keeps rewarding confident guessing over honest uncertainty.
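To make that incentive concrete, here is a minimal sketch in Python of how accuracy-only grading plays out; the 25 percent lucky-guess rate is an assumed number for illustration, not a figure from OpenAI.

```python
# Hypothetical numbers for illustration; not taken from OpenAI's post.
p_lucky = 0.25  # assumed chance that a blind guess happens to be right

# Accuracy-only grading: a right answer earns 1 point, everything else 0.
expected_if_guessing = p_lucky * 1 + (1 - p_lucky) * 0
expected_if_abstaining = 0  # "I don't know" earns nothing

print(expected_if_guessing)    # 0.25 points per unknown question
print(expected_if_abstaining)  # 0.0, so the scoreboard teaches guessing
```

However small the odds of a lucky guess, they always beat the zero points that honesty earns under this grading scheme.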
OpenAI suggests that evaluation methods should instead “penalize confident errors more than you penalize uncertainty,” while offering “partial credit for appropriate expressions of uncertainty.” Their central recommendation is an overhaul of the common accuracy-based evaluations so that guesswork in AI answers is penalized rather than rewarded.
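A scoring rule along those lines could look something like the sketch below; the function name, the 0.3 partial credit, and the -1.0 penalty are illustrative assumptions, not values from OpenAI’s work.

```python
# A minimal sketch of the kind of scoring rule OpenAI describes, with
# assumed weights chosen only to show the shape of the incentive.

def score_answer(answer: str, correct: str, abstained: bool) -> float:
    PARTIAL_CREDIT = 0.3   # assumed reward for an honest "I don't know"
    ERROR_PENALTY = -1.0   # assumed cost of a confident wrong answer
    if abstained:
        return PARTIAL_CREDIT   # partial credit for expressed uncertainty
    if answer == correct:
        return 1.0              # full credit for a correct answer
    return ERROR_PENALTY        # confident errors cost more than abstaining

# With these weights, blind guessing at a 25% hit rate has negative
# expected value (0.25 * 1.0 + 0.75 * -1.0 = -0.5), so abstaining (0.3)
# becomes the rational choice.
print(score_answer("Paris", "Paris", abstained=False))  # 1.0
print(score_answer("Lyon", "Paris", abstained=False))   # -1.0
print(score_answer("", "Paris", abstained=True))        # 0.3
```

Under any weighting of this general shape, a model maximizes its score by admitting uncertainty rather than gambling on a lucky guess.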