<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Blog | AI Alliance</title>
    <link>https://thealliance.ai/blog</link>
    <description>Perspectives, news, and technical reports from our community of technology creators, developers and adopters.</description>
    <language>en</language>
    <pubDate>Thu, 02 Apr 2026 11:30:40 GMT</pubDate>
    <dc:date>2026-04-02T11:30:40Z</dc:date>
    <dc:language>en</dc:language>
    <item>
      <title>AI Alliance Urges Lawmakers to Rethink the NY RAISE Act | Blog</title>
      <link>https://thealliance.ai/blog/ai-alliance-urges-lawmakers-to-rethink-the-ny-raise-act</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/ai-alliance-urges-lawmakers-to-rethink-the-ny-raise-act" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/Printscreen%20of%20AI%20Alliance%20Association%20Statement%20in%20Opposition%20to%20NY%20RAISE%20Act..png" alt="Printscreen of AI Alliance Association Statement in Opposition to NY RAISE Act." class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;The AI Alliance Association has submitted a letter to New York State legislative leaders expressing strong opposition to the proposed&amp;nbsp;&lt;a href="https://legislation.nysenate.gov/pdf/bills/2025/s6953b"&gt;&lt;span&gt;Responsible AI Safety and Education (RAISE) Act (A.6453/S.6953)&lt;/span&gt;&lt;/a&gt;. The letter warns that the bill would unintentionally undermine open source AI innovation in New York by imposing impractical obligations on model developers, introducing significant legal uncertainty, and effectively blocking the release of frontier open source models in the state.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/ai-alliance-urges-lawmakers-to-rethink-the-ny-raise-act" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/Printscreen%20of%20AI%20Alliance%20Association%20Statement%20in%20Opposition%20to%20NY%20RAISE%20Act..png" alt="Printscreen of AI Alliance Association Statement in Opposition to NY RAISE Act." class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;The AI Alliance Association has submitted a letter to New York State legislative leaders expressing strong opposition to the proposed&amp;nbsp;&lt;a href="https://legislation.nysenate.gov/pdf/bills/2025/s6953b"&gt;&lt;span&gt;Responsible AI Safety and Education (RAISE) Act (A.6453/S.6953)&lt;/span&gt;&lt;/a&gt;. The letter warns that the bill would unintentionally undermine open source AI innovation in New York by imposing impractical obligations on model developers, introducing significant legal uncertainty, and effectively blocking the release of frontier open source models in the state.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=44725920&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fthealliance.ai%2Fblog%2Fai-alliance-urges-lawmakers-to-rethink-the-ny-raise-act&amp;amp;bu=https%253A%252F%252Fthealliance.ai%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Insights</category>
      <pubDate>Tue, 18 Nov 2025 21:19:00 GMT</pubDate>
      <guid>https://thealliance.ai/blog/ai-alliance-urges-lawmakers-to-rethink-the-ny-raise-act</guid>
      <dc:date>2025-11-18T21:19:00Z</dc:date>
      <dc:creator>Admin</dc:creator>
    </item>
    <item>
      <title>LLM-as-a-Judge Without the Headaches: EvalAssist Brings Structure and Simplicity to the Chaos of LLM Output Review | Blog</title>
      <link>https://thealliance.ai/blog/llm-as-a-judge-without-the-headaches-evalassist-br</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/llm-as-a-judge-without-the-headaches-evalassist-br" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/llm-as-a-judge-without-the-headaches-evalassist.png" alt="A blue circle with a 3D triangle and the words &amp;amp;quot;EvalAssist&amp;amp;quot; and &amp;amp;quot;LL." class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;You have generated a large batch of model outputs from a mixture of off-the-shelf and fine-tuned LLMs, and now you need to evaluate them at scale. But how do you know which ones actually meet the expectations of your use case? While benchmarks and automated metrics are great tools for validating the initial usefulness of a model or prompt, they require ground truth and often miss the nuance that matters in real-world scenarios; think of evaluating chatbot responses for politeness, fairness, tone, clarity, or inclusiveness. Most teams turn to human evaluation, but manual review doesn’t scale well. That’s where large language models as evaluators (LLM-as-a-Judge) come into play. This popular approach can help accelerate human review, provided the evaluation criteria are well aligned with human intentions and the results are trustworthy.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/llm-as-a-judge-without-the-headaches-evalassist-br" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/llm-as-a-judge-without-the-headaches-evalassist.png" alt="A blue circle with a 3D triangle and the words &amp;amp;quot;EvalAssist&amp;amp;quot; and &amp;amp;quot;LL." class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;You have generated a large batch of model outputs from a mixture of off-the-shelf and fine-tuned LLMs, and now you need to evaluate them at scale. But how do you know which ones actually meet the expectations of your use case? While benchmarks and automated metrics are great tools for validating the initial usefulness of a model or prompt, they require ground truth and often miss the nuance that matters in real-world scenarios; think of evaluating chatbot responses for politeness, fairness, tone, clarity, or inclusiveness. Most teams turn to human evaluation, but manual review doesn’t scale well. That’s where large language models as evaluators (LLM-as-a-Judge) come into play. This popular approach can help accelerate human review, provided the evaluation criteria are well aligned with human intentions and the results are trustworthy.&lt;/p&gt;
&lt;img src="https://track.hubspot.com/__ptq.gif?a=44725920&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fthealliance.ai%2Fblog%2Fllm-as-a-judge-without-the-headaches-evalassist-br&amp;amp;bu=https%253A%252F%252Fthealliance.ai%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Insights</category>
      <pubDate>Tue, 18 Nov 2025 13:00:00 GMT</pubDate>
      <guid>https://thealliance.ai/blog/llm-as-a-judge-without-the-headaches-evalassist-br</guid>
      <dc:date>2025-11-18T13:00:00Z</dc:date>
      <dc:creator>Admin</dc:creator>
    </item>
    <item>
      <title>The AI Alliance releases new AI-powered programming language and industrial AI agent framework, adds new Japanese members, and launches AI Alliance Japan</title>
      <link>https://thealliance.ai/blog/the-ai-alliance-releases-new-ai-powered-programmin</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/the-ai-alliance-releases-new-ai-powered-programmin" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/ai-alliance-illustration-1.webp" alt="A square with a smaller square inside, set against a blue background" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;em&gt;Tokyo, Japan&lt;/em&gt;, June 26, 2025 - Today the AI Alliance, a global non-profit consortium dedicated to open-source AI innovation, announced Dana, a new open-source AI-powered programming language, and the OpenDXA agent framework for industrial AI. The non-profit also launched AI Alliance Japan, a new initiative to support AI sovereignty and industrial AI in Japan, and welcomed three new Japanese members: Mitsubishi Electric, National Institute of Informatics, and SAKURA internet Inc.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/the-ai-alliance-releases-new-ai-powered-programmin" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/ai-alliance-illustration-1.webp" alt="A square with a smaller square inside, set against a blue background" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;em&gt;Tokyo, Japan&lt;/em&gt;, June 26, 2025 - Today the AI Alliance, a global non-profit consortium dedicated to open-source AI innovation, announced Dana, a new open-source AI-powered programming language, and the OpenDXA agent framework for industrial AI. The non-profit also launched AI Alliance Japan, a new initiative to support AI sovereignty and industrial AI in Japan, and welcomed three new Japanese members: Mitsubishi Electric, National Institute of Informatics, and SAKURA internet Inc.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=44725920&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fthealliance.ai%2Fblog%2Fthe-ai-alliance-releases-new-ai-powered-programmin&amp;amp;bu=https%253A%252F%252Fthealliance.ai%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Insights</category>
      <pubDate>Tue, 18 Nov 2025 12:45:00 GMT</pubDate>
      <guid>https://thealliance.ai/blog/the-ai-alliance-releases-new-ai-powered-programmin</guid>
      <dc:date>2025-11-18T12:45:00Z</dc:date>
      <dc:creator>Admin</dc:creator>
    </item>
    <item>
      <title>DoomArena: A Security Testing Framework for AI Agents | Blog</title>
      <link>https://thealliance.ai/blog/doomarena-a-security-testing-framework-for-ai-agen</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/doomarena-a-security-testing-framework-for-ai-agen" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/ai-alliance-illustration-4.png" alt="Abstract geometric shapes in various colors, including orange, blue, and gray" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;As AI agents become increasingly autonomous and widely deployed, they introduce new attack surfaces and amplify existing security vulnerabilities. The DoomArena framework, developed by researchers at ServiceNow Research, addresses critical gaps in current AI agent security testing approaches.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/doomarena-a-security-testing-framework-for-ai-agen" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/ai-alliance-illustration-4.png" alt="Abstract geometric shapes in various colors, including orange, blue, and gray" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;As AI agents become increasingly autonomous and widely deployed, they introduce new attack surfaces and amplify existing security vulnerabilities. The DoomArena framework, developed by researchers at ServiceNow Research, addresses critical gaps in current AI agent security testing approaches.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=44725920&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fthealliance.ai%2Fblog%2Fdoomarena-a-security-testing-framework-for-ai-agen&amp;amp;bu=https%253A%252F%252Fthealliance.ai%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Insights</category>
      <pubDate>Tue, 18 Nov 2025 12:30:00 GMT</pubDate>
      <guid>https://thealliance.ai/blog/doomarena-a-security-testing-framework-for-ai-agen</guid>
      <dc:date>2025-11-18T12:30:00Z</dc:date>
      <dc:creator>Admin</dc:creator>
    </item>
    <item>
      <title>Mastering Data Cleaning for Fine-Tuning LLMs and RAG Architectures | Blog</title>
      <link>https://thealliance.ai/blog/mastering-data-cleaning-for-fine-tuning-llms-and-r</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/mastering-data-cleaning-for-fine-tuning-llms-and-r" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/mastering-data-cleaning-for-fine-tuning-llms-cover.png" alt="Mastering data cleaning graphic with bucket" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;Introduction&lt;/h2&gt; 
&lt;p&gt;Data quality is paramount in AI, especially for advanced applications like fine-tuning Large Language Models (LLMs) and implementing Retrieval-Augmented Generation (RAG) architectures. These systems thrive on structured, high-quality datasets to generate accurate, contextually relevant outputs. Poor-quality data, on the other hand, can result in hallucinations in LLM outputs, irrelevant document retrievals, or even significant biases that erode trust in AI systems.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/mastering-data-cleaning-for-fine-tuning-llms-and-r" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/mastering-data-cleaning-for-fine-tuning-llms-cover.png" alt="Mastering data cleaning graphic with bucket" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;Introduction&lt;/h2&gt; 
&lt;p&gt;Data quality is paramount in AI, especially for advanced applications like fine-tuning Large Language Models (LLMs) and implementing Retrieval-Augmented Generation (RAG) architectures. These systems thrive on structured, high-quality datasets to generate accurate, contextually relevant outputs. Poor-quality data, on the other hand, can result in hallucinations in LLM outputs, irrelevant document retrievals, or even significant biases that erode trust in AI systems.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=44725920&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fthealliance.ai%2Fblog%2Fmastering-data-cleaning-for-fine-tuning-llms-and-r&amp;amp;bu=https%253A%252F%252Fthealliance.ai%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Insights</category>
      <pubDate>Tue, 18 Nov 2025 12:00:00 GMT</pubDate>
      <guid>https://thealliance.ai/blog/mastering-data-cleaning-for-fine-tuning-llms-and-r</guid>
      <dc:date>2025-11-18T12:00:00Z</dc:date>
      <dc:creator>Admin</dc:creator>
    </item>
    <item>
      <title>GEO-Bench 2: From Performance to Capability, Rethinking Evaluation in Geospatial AI | Blog</title>
      <link>https://thealliance.ai/blog/geo-bench-2-from-performance-to-capability-rethinking-evaluation-in-geospatial-ai</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/geo-bench-2-from-performance-to-capability-rethinking-evaluation-in-geospatial-ai" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/geo-bench-2-from-performance-geobench.png" alt="Earth from space with the sun rising over the horizon." class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;A New Era for Geospatial AI&lt;/h2&gt; 
&lt;p&gt;Geospatial Foundation Models (GeoFMs) are large-scale AI models trained on diverse Earth observation data to support multiple geospatial tasks. They are transforming how we understand and manage our planet. But as these models grow in complexity and capability, one question becomes critical:&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/geo-bench-2-from-performance-to-capability-rethinking-evaluation-in-geospatial-ai" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/geo-bench-2-from-performance-geobench.png" alt="Earth from space with the sun rising over the horizon." class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;A New Era for Geospatial AI&lt;/h2&gt; 
&lt;p&gt;Geospatial Foundation Models (GeoFMs) are large-scale AI models trained on diverse Earth observation data to support multiple geospatial tasks. They are transforming how we understand and manage our planet. But as these models grow in complexity and capability, one question becomes critical:&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=44725920&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fthealliance.ai%2Fblog%2Fgeo-bench-2-from-performance-to-capability-rethinking-evaluation-in-geospatial-ai&amp;amp;bu=https%253A%252F%252Fthealliance.ai%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>A New Era for Geospatial AI</category>
      <category>Insights</category>
      <pubDate>Wed, 15 Oct 2025 17:45:00 GMT</pubDate>
      <guid>https://thealliance.ai/blog/geo-bench-2-from-performance-to-capability-rethinking-evaluation-in-geospatial-ai</guid>
      <dc:date>2025-10-15T17:45:00Z</dc:date>
      <dc:creator>Admin</dc:creator>
    </item>
    <item>
      <title>From Semiconductor to Maritime: A Blueprint for Domain-Specific AI in Safety-Critical Industries | Blog</title>
      <link>https://thealliance.ai/blog/from-semiconductor-to-maritime-a-blueprint-for-dom2</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/from-semiconductor-to-maritime-a-blueprint-for-dom2" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/from-semiconductor-to-maritime-1.png" alt="Image for From semiconductor to maritime blog. IMAGE FOR AI ALLIANCE USE ONLY." class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Maritime shipping is responsible for transporting nearly 80% of global trade, operating across ocean environments that leave little margin for error. Each voyage must contend with severe weather and congested sea lanes, while maintaining compliance with international regulatory frameworks such as the International Regulations for Preventing Collisions at Sea (COLREGs). In this context, even a single navigational misjudgment has the potential to endanger human life and cause large-scale environmental harm.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/from-semiconductor-to-maritime-a-blueprint-for-dom2" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/from-semiconductor-to-maritime-1.png" alt="Image for From semiconductor to maritime blog. IMAGE FOR AI ALLIANCE USE ONLY." class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Maritime shipping is responsible for transporting nearly 80% of global trade, operating across ocean environments that leave little margin for error. Each voyage must contend with severe weather and congested sea lanes, while maintaining compliance with international regulatory frameworks such as the International Regulations for Preventing Collisions at Sea (COLREGs). In this context, even a single navigational misjudgment has the potential to endanger human life and cause large-scale environmental harm.&lt;/p&gt;
&lt;img src="https://track.hubspot.com/__ptq.gif?a=44725920&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fthealliance.ai%2Fblog%2Ffrom-semiconductor-to-maritime-a-blueprint-for-dom2&amp;amp;bu=https%253A%252F%252Fthealliance.ai%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Science</category>
      <category>Insights</category>
      <pubDate>Wed, 08 Oct 2025 17:45:00 GMT</pubDate>
      <guid>https://thealliance.ai/blog/from-semiconductor-to-maritime-a-blueprint-for-dom2</guid>
      <dc:date>2025-10-08T17:45:00Z</dc:date>
      <dc:creator>Admin</dc:creator>
    </item>
    <item>
      <title>Building AI Agents to Real-World Use Cases | Blog</title>
      <link>https://thealliance.ai/blog/building-ai-agents-to-real-world-use-cases</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/building-ai-agents-to-real-world-use-cases" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/ai-alliance-illustration-8.png" alt="A geometric composition featuring a blue square, black rectangle, and white lines." class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;Powered by Gofannon and AgentLabUI&lt;/h2&gt; 
&lt;p&gt;At The AI Alliance, we’re advancing open-source projects that make AI more practical, collaborative, and impactful across industries. Two of these projects are Gofannon, a set of agent tools, and AgentLabUI, a practitioner-focused environment where technical teams can create, tune, and deploy AI agents.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/building-ai-agents-to-real-world-use-cases" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/ai-alliance-illustration-8.png" alt="A geometric composition featuring a blue square, black rectangle, and white lines." class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;Powered by Gofannon and AgentLabUI&lt;/h2&gt; 
&lt;p&gt;At The AI Alliance, we’re advancing open-source projects that make AI more practical, collaborative, and impactful across industries. Two of these projects are Gofannon, a set of agent tools, and AgentLabUI, a practitioner-focused environment where technical teams can create, tune, and deploy AI agents.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=44725920&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fthealliance.ai%2Fblog%2Fbuilding-ai-agents-to-real-world-use-cases&amp;amp;bu=https%253A%252F%252Fthealliance.ai%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>AI Agents</category>
      <category>Insights</category>
      <pubDate>Wed, 01 Oct 2025 17:45:00 GMT</pubDate>
      <guid>https://thealliance.ai/blog/building-ai-agents-to-real-world-use-cases</guid>
      <dc:date>2025-10-01T17:45:00Z</dc:date>
      <dc:creator>Admin</dc:creator>
    </item>
    <item>
      <title>AI Alliance x National AI Research Resource Pilot Deep Partnership Program | Blog</title>
      <link>https://thealliance.ai/blog/ai-alliance-x-national-ai-research-resource-pilot-</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/ai-alliance-x-national-ai-research-resource-pilot-" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/ai-alliance-photo-block-9_16x9.avif" alt="Three people in an office setting, discussing ideas" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;The AI Alliance is proud to announce our participation in the National AI Research Resource (NAIRR) Pilot Deep Partnership program. This matters for you if you’re an AI researcher or educator who’s been blocked by expensive computing costs or limited access to high-performance resources.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/ai-alliance-x-national-ai-research-resource-pilot-" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/ai-alliance-photo-block-9_16x9.avif" alt="Three people in an office setting, discussing ideas" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;The AI Alliance is proud to announce our participation in the National AI Research Resource (NAIRR) Pilot Deep Partnership program. This matters for you if you’re an AI researcher or educator who’s been blocked by expensive computing costs or limited access to high-performance resources.&lt;/p&gt;
&lt;img src="https://track.hubspot.com/__ptq.gif?a=44725920&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fthealliance.ai%2Fblog%2Fai-alliance-x-national-ai-research-resource-pilot-&amp;amp;bu=https%253A%252F%252Fthealliance.ai%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>News</category>
      <category>Insights</category>
      <pubDate>Thu, 18 Sep 2025 01:15:00 GMT</pubDate>
      <guid>https://thealliance.ai/blog/ai-alliance-x-national-ai-research-resource-pilot-</guid>
      <dc:date>2025-09-18T01:15:00Z</dc:date>
      <dc:creator>Admin</dc:creator>
    </item>
    <item>
      <title>How Can We Test Enterprise AI Applications? | Blog</title>
      <link>https://thealliance.ai/blog/how-can-we-test-enterprise-ai-applications</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/how-can-we-test-enterprise-ai-applications" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/ai-alliance-photo-line-8.png" alt="Two people looking at laptop" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;Overview&lt;/h2&gt; 
&lt;p&gt;When the AI Alliance started, I became co-leader of the&amp;nbsp;&lt;a href="https://thealliance.ai/focus-areas/trust-and-safety"&gt;&lt;span&gt;Trust and Safety Focus Area&lt;/span&gt;&lt;/a&gt;, because it was clear that without the ability to trust AI, it would not be widely adopted into enterprise and consumer applications. We have made a lot of progress since then, but another, related blocking issue became apparent to me about a year ago.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://thealliance.ai/blog/how-can-we-test-enterprise-ai-applications" title="" class="hs-featured-image-link"&gt; &lt;img src="https://thealliance.ai/hubfs/AI_Alliance_March2026/images/ai-alliance-photo-line-8.png" alt="Two people looking at laptop" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;h2&gt;Overview&lt;/h2&gt; 
&lt;p&gt;When the AI Alliance started, I became co-leader of the&amp;nbsp;&lt;a href="https://thealliance.ai/focus-areas/trust-and-safety"&gt;&lt;span&gt;Trust and Safety Focus Area&lt;/span&gt;&lt;/a&gt;, because it was clear that without the ability to trust AI, it would not be widely adopted into enterprise and consumer applications. We have made a lot of progress since then, but another, related blocking issue became apparent to me about a year ago.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=44725920&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fthealliance.ai%2Fblog%2Fhow-can-we-test-enterprise-ai-applications&amp;amp;bu=https%253A%252F%252Fthealliance.ai%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Research</category>
      <category>Insights</category>
      <pubDate>Wed, 17 Sep 2025 01:30:00 GMT</pubDate>
      <guid>https://thealliance.ai/blog/how-can-we-test-enterprise-ai-applications</guid>
      <dc:date>2025-09-17T01:30:00Z</dc:date>
      <dc:creator>Admin</dc:creator>
    </item>
  </channel>
</rss>
