Just wanted to share some quick initial feedback after attempting to use the DoD's GenAI.
Right now, the only LLM implemented is Gemini; the other three (Grok, ChatGPT, and Claude) aren't accessible yet. Overall, a disappointment, though I didn't have high hopes anyway. Still, I figured I'd see if I could use it for several use cases.
First, the military probably has one of the largest corpora of data to pull from, considering all our published instructions and manuals. This was my first attempt: I figured I should be able to get Gemini to format a standard naval letter, since there is plenty of plain-text reference material for it to interpret. Nope. In fact, the time it took me to generate a simple message in official formatting, with multiple prompts attempting to correct it, ran well beyond what it would have taken to just create one from scratch. I gave up; it never got the format right.
Second, I knew this was a long shot, but I had a "brag sheet" for recognizing someone for their service during the quarter (what we call "Sailor, Soldier, NCO, etc. of the Quarter"). These packages have a standard format, and the brag sheet contains the relevant data points to input. In theory it should have been plug and play. Instead, it was a repeat of the above.
Last attempt at a real-world use case: operational planning. This was a very simple test. I grabbed grid coordinates for a known area that could support one of our rotary-wing platforms (a helicopter) as a landing zone and prompted, "Would these grid coordinates support a CV-22 landing zone?" What I got was generic information about "theorized" capability but no recommendation, because it couldn't assess the area. While we have individuals who specifically do this analysis, I've had to do this kind of site assessment earlier in my career, as offhand work for on-the-fly medevac LZs to support training cycles. Not a big ask, but if it can't do this basic task, I doubt it can do anything very complex.
In essence, the only useful case I've found is rewording citations or write-ups for performance reports to include the most up-to-date buzzwords leadership has set forth in their mission/vision statements. Basically, just adding fluff to nothing.
Thus far, it's just a way for the DoD to use AI models on government computers that were previously blocked. Sure, I can use it for generic open-source analysis and comparisons, but most of the time I can do that from my phone as well. This just streamlines information transfer; nothing really groundbreaking.
However, seeing the advances in Claude with Excel, I'm interested to see how it performs once it comes online for the DoD.
Your most recent discussion on AI triggered this email; I figured I'd send off a quick snapshot.
Thank you for all you do,
Not Sent from my iPhone