I am finally facing it: AI isn’t going away.
My kids played in the orchestra pit for the school musical last week, and one of the great laugh lines in the show (Big Fish) is a chant:
“What do we want?”
“Nothing to change!”
“When do we want it?”
“Forever!”
Color me guilty. I’ve had my head a bit in the sand regarding using AI. My recalcitrance isn’t just because I hate change, though. Thinkers and moralists whom I admire see zero positive outcomes in the use of AI. Authors Roxane Gay and Daniel Abraham, whose moral compasses I trust, will be the last holdouts.
I turned to my brother, who is a philosophy professor. How does the good Dr. Pavelich – ethics prof extraordinaire – wrestle with the ethics of AI? Turns out he doesn’t. “It’s 100% useless and evil, so no wrestling needed” (I’m paraphrasing).
I’m watching data center after data center pop up by my home in Central Washington. We have cheap-ish land, super-cheap hydropower, and the raging Columbia River as a cool water source. Microsoft says they’ll be good neighbors, but oof, the change is fast and real and right here. And even if they’re “water positive” by 2030, do we really need massive water heaters when we’re smack dab in the middle of a climate crisis?
But it’s here.
So, as a professional tech consultant, it’s time to figure out an approach I’m ok with.
One of the reasons I love working at Soliant is that I’m not alone in this. Steve Lane, our former President, gave a great talk at Engage 2025 that focused in part on the veracity of AI output. Jeremiah Small, our Principal Technology Strategist, wrote our internal Generative AI guidelines, which hugely influenced my thoughts below. Wim Decorte is a trusted advisor in the truest sense of the phrase, and processing this with him has been invaluable.
AI North Stars
In the hope they’re useful to you, here are our North Stars regarding the use of AI.
- Utility
- Security
- Sustainability
- Veracity
- Transparency
UTILITY
First off, is this even helping anyone? Our work is generally focused and specific. The reason we build custom software is that our clients have particular needs – and none of those involve generating images of people with too many hands.
AI is qualified to be a mid-level research assistant. AI is not qualified to create art.
Using AI to predict breast cancer from mammograms – I’m open to the idea that it could be useful.
Karl Jreijiri put together a solution that combs published FileMaker resources to create a very specific chatbot. THAT is useful.
For FileMaker solutions, semantic search on pictures can be powerful, and we’re sure more AI features are coming that can both help developers write better code and help users unlock value from their data. But right now, that utility is limited.
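To make the picture-search idea concrete, here’s a minimal sketch of the underlying technique: embed images and plain-language queries into the same vector space and rank by similarity. It uses the open-source CLIP model via sentence-transformers, and the folder name and query text are made-up placeholders – this illustrates the general approach, not how FileMaker implements its feature.

```python
# Minimal sketch: semantic image search with CLIP embeddings.
# Assumes: pip install sentence-transformers pillow
# The "photos" folder and the query text are placeholders.
from pathlib import Path

from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP maps images and text into the same embedding space.
model = SentenceTransformer("clip-ViT-B-32")

# Embed every image once; in a real solution you'd store these vectors.
image_paths = sorted(Path("photos").glob("*.jpg"))
image_embeddings = model.encode([Image.open(p) for p in image_paths])

# A plain-language query lands in the same space...
query_embedding = model.encode("receipts photographed on a desk")

# ...so cosine similarity ranks the images by semantic match.
scores = util.cos_sim(query_embedding, image_embeddings)[0]
ranked = sorted(zip(image_paths, scores), key=lambda t: float(t[1]), reverse=True)
for path, score in ranked:
    print(f"{float(score):.3f}  {path.name}")
```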
SECURITY
Will your data be feeding a public LLM’s training routines, and do you trust that your data is kept secure? Will you be able to count on the offerings and reliability of a public LLM when you add it as a dependency? Hosting an LLM locally means you won’t incur subscription costs, but more importantly, your data will remain in your control.
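The barrier to keeping things local is lower than you might think. Here’s a minimal sketch that queries a locally hosted model through Ollama’s HTTP API – it assumes you’ve installed Ollama and pulled a model (e.g., `ollama pull llama3`); substitute whatever local runtime you actually use.

```python
# Minimal sketch: querying a locally hosted model via Ollama's HTTP API.
# Nothing in this request leaves the machine, so the prompt (and any
# client data inside it) stays in your control.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "llama3",                  # any model you've pulled locally
        "prompt": "Summarize this client note: ...",
        "stream": False,                    # one JSON object, not a token stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```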
SUSTAINABILITY
AI uses a crap ton of computing power. A lot of energy, a lot of heat, a lot of water. Given the climate crisis, the benefits had better be significant if we’re going to add more fuel to the fire.
Does every query you have need to go to GPT-4? What’s the smallest footprint possible? I just heard about Small Language Models and am eager to learn more. These could be particularly useful for our clients who generally work in a specific domain and don’t need their AI to pretend it knows the whole world.
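One pattern worth exploring is “smallest model first, escalate as the exception.” The sketch below is purely hypothetical – the model names, the refusal check, and the ask() stub are placeholders for whatever clients you actually use – but it shows the shape of the idea.

```python
# Hypothetical sketch: route queries to the smallest adequate model.
# Model names, the refusal check, and ask() are placeholder assumptions.

def ask(prompt: str, model: str) -> str:
    # Stand-in for your real client call (e.g., a local SLM via Ollama
    # for the small model, a hosted API for the big one).
    return f"[{model}] answer to: {prompt}"

def answer(prompt: str) -> str:
    draft = ask(prompt, model="phi-3-mini")   # small, local, cheap first
    # Escalate only if the small model clearly punts.
    if "i don't know" in draft.lower() or "i can't" in draft.lower():
        return ask(prompt, model="gpt-4o")
    return draft

print(answer("What's our naming convention for FileMaker layouts?"))
```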
Another red flag: If you use AI to check the veracity of your first AI output, you’re doubling down on the environmental impact.
VERACITY
As Steve said in his Engage talk, hallucinations are a feature, not a bug. Gen AI WILL BE WRONG. It is a pleaser and will lie to you. Whatever your use case, your AI output will need verification. Ensure your plan includes fact-checking to the degree appropriate for the project.
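Some of that fact-checking can even be mechanical. As one small, hedged example, the sketch below checks that every URL an AI draft cites actually resolves. It catches hallucinated references; verifying the claims themselves still takes a human.

```python
# Minimal sketch: flag cited URLs that don't resolve.
# Catches hallucinated references only; it does NOT verify the claims.
import re

import requests

def dead_citations(ai_output: str) -> list[str]:
    """Return cited URLs that fail to load."""
    urls = [u.rstrip(".,)") for u in re.findall(r"https?://\S+", ai_output)]
    dead = []
    for url in urls:
        try:
            r = requests.head(url, allow_redirects=True, timeout=10)
            if r.status_code >= 400:
                dead.append(url)
        except requests.RequestException:
            dead.append(url)
    return dead

draft = "Per https://example.com/ and https://example.com/made-up-study ..."
for url in dead_citations(draft):
    print("Could not verify:", url)
```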
TRANSPARENCY
Don’t lie. Don’t pass off AI work as your own. Be clear with each other/your client/Dr. Pavelich that you used AI as a jumping-off point. Soliant’s NORTH North Star is that we are trusted advisors to our clients. Transparency is critical to that relationship.
We are already seeing some companies change their performance review processes to check whether staff have passed off AI-generated work as their own.
Moving Forward
So, IF all of the elements above are honored and the good outweighs the bad… then I can get my head around AI implementations that feel true and proper.
Any North Stars you think I’ve missed? How are you wrestling with this (or are you)? I’d love to hear your thoughts.
Resources
- https://e360.yale.edu/features/artificial-intelligence-climate-energy-emissions
- https://www.integrityenergy.com/blog/the-shocking-truth-of-ai-energy-consumption/
- https://www.npr.org/2024/07/10/nx-s1-5028558/artificial-intelligences-thirst-for-electricity
- https://local.microsoft.com/blog/understanding-microsoft-datacenters-in-central-washington/
- https://bsky.app/profile/roxanegay.bsky.social/post/3lnl44usnus2m
- https://www.nytimes.com/2024/03/16/business/work-friend-roxane-gay.html
- https://www.wired.com/story/why-researchers-are-turning-to-small-language-models/
- https://huggingface.co/blog/jjokah/small-language-model