Ethical AI?

I am finally facing it: AI isn’t going away.

My kids played in the orchestra pit for the school musical last week, and one of the great laugh lines in the show (Big Fish) is a chant:

“What do we want?”
     “Nothing to change!”
“When do we want it?”
     “Forever!”

Color me guilty. I’ve had my head a bit in the sand about using AI. My recalcitrance isn’t just because I hate change, though. Thinkers and moralists whom I admire see zero positive outcomes in the use of AI. Authors Roxane Gay and Daniel Abraham, whose moral compasses I trust, will be the last holdouts.

I turned to my brother, who is a philosophy professor. How does the good Dr. Pavelich – ethics prof extraordinaire – wrestle with the ethics of AI? Turns out he doesn’t. “It’s 100% useless and evil, so no wrestling needed” (I’m paraphrasing).

I’m watching data center after data center pop up by my home in Central Washington. We have cheap-ish land, super-cheap hydropower, and the raging Columbia River as a cool water source. Microsoft says they’ll be good neighbors, but oof, the change is fast and real and right here. And even if they’re “water positive” by 2030, do we really need massive water heaters when we’re smack dab in the middle of a climate crisis?

But it’s here.

So, as a professional tech consultant, it’s time to figure out an approach I’m ok with.

One of the reasons I love working at Soliant is that I’m not alone in this. Steve Lane, our former President, gave a great talk at Engage 2025 that included a focus on the veracity of AI output. Jeremiah Small, our Principal Technology Strategist, wrote our internal Generative AI guidelines, which hugely influenced my thoughts below. Wim Decorte is a trusted advisor in the truest sense of the phrase, and processing this with him has been invaluable.

AI North Stars

In the hope they’re useful to you, here are our North Stars regarding the use of AI.

  • Utility
  • Security
  • Sustainability
  • Veracity
  • Transparency

UTILITY

First off, is this even helping anyone? Our work is generally focused and specific. The reason we build custom software is that our clients have particular needs – and none of those involve generating images of people with too many hands.

AI is qualified to be a mid-level research assistant. AI is not qualified to create art.

Using AI to predict breast cancer based on mammograms – I’m open to the idea that could be useful.

Karl Jreijiri put together a solution that combs published FileMaker resources to create a very specific chatbot. THAT is useful.
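
I don’t know the internals of Karl’s solution, but the general pattern behind a domain-specific chatbot like that is retrieval-augmented generation: embed your curated documents, retrieve the closest matches to each question, and hand only those to the model. Here’s a minimal sketch of that pattern – the embed() and generate() helpers are hypothetical stand-ins for whatever embedding model and LLM you actually use:

```python
# Sketch of retrieval-augmented generation (RAG) over a curated corpus.
# embed() and generate() are hypothetical stand-ins for whatever
# embedding model and LLM endpoint you actually use.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical: return a unit-length embedding vector for `text`."""
    raise NotImplementedError("plug in your embedding model here")

def generate(prompt: str) -> str:
    """Hypothetical: send `prompt` to your LLM of choice."""
    raise NotImplementedError("plug in your LLM call here")

def answer(question: str, docs: list[str], top_k: int = 3) -> str:
    # In practice you'd embed the corpus once and store the vectors.
    doc_vecs = np.stack([embed(d) for d in docs])
    scores = doc_vecs @ embed(question)             # cosine similarity (unit vectors)
    best = [docs[i] for i in np.argsort(scores)[::-1][:top_k]]
    prompt = (
        "Answer using ONLY the sources below. If they don't cover it, say so.\n\n"
        "Sources:\n" + "\n---\n".join(best) + "\n\nQuestion: " + question
    )
    return generate(prompt)
```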

For FileMaker solutions, semantic search on pictures can be powerful, and we’re sure more AI features are coming that will both help developers write better code and help users unlock value from their data. But right now, that utility is limited.
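
If you’re curious how semantic search on pictures works under the hood, the core idea is to embed images and text queries into one shared vector space and rank by similarity. Here’s a rough sketch using the open-source sentence-transformers CLIP model – an illustration of the technique, not a peek at FileMaker’s implementation:

```python
# Semantic image search sketch: CLIP puts images and text in the same
# vector space, so a text query can rank a folder of pictures.
# Requires: pip install sentence-transformers pillow
from pathlib import Path

from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

image_paths = sorted(Path("photos").glob("*.jpg"))   # example folder of images
image_embs = model.encode(
    [Image.open(p) for p in image_paths],
    convert_to_tensor=True,
    normalize_embeddings=True,
)

query_emb = model.encode(
    "handwritten notes on an invoice",               # example query
    convert_to_tensor=True,
    normalize_embeddings=True,
)

scores = util.cos_sim(query_emb, image_embs)[0]      # one score per image
ranked = sorted(zip(image_paths, scores.tolist()), key=lambda x: -x[1])
for path, score in ranked[:5]:
    print(f"{score:.3f}  {path.name}")
```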

SECURITY

Will your data be feeding a public LLM’s training routines, and do you trust that your data is kept secure? Will you be able to count on the offerings and reliability of a public LLM when you add that as a dependency? Hosting an LLM locally means you won’t incur subscription costs, but more importantly, your data will remain in your control.
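
To make the local option concrete: tools like Ollama serve models over a local HTTP API, so the prompt and the response never leave your machine. A minimal sketch, assuming an Ollama server on its default port and an example model already pulled:

```python
# Query a locally hosted LLM via Ollama's HTTP API (default port 11434).
# The prompt and response never leave your machine.
# Requires: pip install requests, plus e.g. `ollama pull llama3`
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",                  # example; use whatever model you've pulled
        "prompt": "Summarize this internal memo: ...",
        "stream": False,                    # one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```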

SUSTAINABILITY

AI uses a crap ton of computing power. A lot of energy, a lot of heat, a lot of water. Given the climate crisis, the benefits had better be significant if we’re going to add more fuel to the fire.

Does every query you have need to go to GPT-4? What’s the smallest footprint possible? I just heard about Small Language Models and am eager to learn more. These could be particularly useful for our clients who generally work in a specific domain and don’t need their AI to pretend it knows the whole world.
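
One way to act on that is tiered routing: send every query to a small model first and escalate to a big one only when the small model punts. A rough sketch – small_model() and large_model() are hypothetical stand-ins, and the escalation heuristic is deliberately crude:

```python
# Tiered-routing sketch: try a small (cheap, local) model first and
# escalate to a large hosted model only when the small one punts.
# small_model() and large_model() are hypothetical stand-ins.

UNSURE_MARKERS = ("i don't know", "i'm not sure", "cannot answer")

def small_model(prompt: str) -> str:
    """Hypothetical: call a small/local language model."""
    raise NotImplementedError

def large_model(prompt: str) -> str:
    """Hypothetical: call a large hosted model."""
    raise NotImplementedError

def route(prompt: str) -> str:
    draft = small_model(
        prompt + "\n\nIf you are not confident, reply exactly: I don't know."
    )
    # Deliberately crude heuristic: escalate only when the small model declines.
    if any(marker in draft.lower() for marker in UNSURE_MARKERS):
        return large_model(prompt)
    return draft
```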

Another red flag: If you use AI to check the veracity of your first AI output, you’re doubling down on the environmental impact.

VERACITY

As Steve said in his Engage talk, hallucinations are a feature, not a bug. Gen AI WILL BE WRONG. It is a pleaser and will lie to you. Whatever your use case, your AI output will need verification. Ensure your plan includes fact-checking to the degree appropriate for the project.
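
What “appropriate fact-checking” looks like varies by project, but even a cheap automated pass can tell a human reviewer where to look hardest. Here’s a sketch that flags output sentences with no close match in the source material – embed() is a hypothetical stand-in for your embedding model, and the 0.6 threshold is an arbitrary placeholder:

```python
# Grounding-check sketch: flag answer sentences with no close match in
# the source material, so a reviewer knows where to look hardest.
# embed() is a hypothetical stand-in for your embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical: return a unit-length embedding for `text`."""
    raise NotImplementedError

def flag_unsupported(answer: str, sources: list[str], threshold: float = 0.6) -> list[str]:
    """Return answer sentences whose best source similarity is below threshold."""
    source_vecs = np.stack([embed(s) for s in sources])
    flagged = []
    # Naive sentence split; a real pass would use a proper tokenizer.
    for sentence in (s.strip() for s in answer.split(".") if s.strip()):
        sims = source_vecs @ embed(sentence)    # cosine similarity (unit vectors)
        if sims.max() < threshold:              # 0.6 is an arbitrary placeholder
            flagged.append(sentence)
    return flagged
```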

TRANSPARENCY

Don’t lie. Don’t pass off AI work as your own. Be clear with each other/your client/Dr. Pavelich that you used AI as a jumping-off point. Soliant’s NORTH North Star is that we are trusted advisors to our clients. Transparency is critical to that relationship.

We are already seeing some companies change their staff’s performance review process to include checks on whether they passed AI-generated work off as their own.

Moving Forward

So, IF all of the elements above are honored, and the good outweighs the bad…then I can get my head around AI implementations that feel true and proper.

Any north stars you think I’ve missed? How are you wrestling with this (or are you)? I’d love to hear your thoughts.

2 thoughts on “Ethical AI?”

  1. Hey Sara

    I think this is a very important conversation. I want to encourage it! A few things I would add.

    In my experience, AI is less a creator and more an assistant – or at least when my experience combines with its knowledge – the result is quite impressive.

    And with the deepest of respect – I do strongly disagree with the sentiment that using AI in code generation is somehow not “my own work”. As I reflected on this, if there were/are issues, I wouldn’t blame AI. If I’m responsible – it’s my code. If I wouldn’t blame it, I would not attribute it.

    Since I see AI as a tool, I often think about the impact of the invention of the table saw on woodworkers. I find this helpful as I think through the implications of AI now and in the future. Today, when woodworkers make a cabinet on a table saw, there is no expectation that they stamp the cabinet with “made with a table saw”. Maybe when table saws first came on the scene there was a “cheating” sentiment, where the products of this tool were “not real furniture”. But that sentiment does not exist today.

    What I can stand beside – and I expect this is more your point – is that developers should not hide their use of the AI tool. Just as a cabinet maker might welcome prospects to their shop where the table saw is plain to see – I feel no need to hide that I use AI as a tool. I also feel no need to lead with it.

    Appreciate your thoughts

    Marcus

    1. Thanks for thinking it through, Marcus! Above all, that’s the point.

      I appreciate the responsibility you’re taking for the code you generate, whether that’s with a tool or with your brain. The table saw analogy works. For now, though, because using GenAI for our code isn’t yet standard or assumed, I like erring on the side of overt transparency. Eventually I might be with you on the “transparent when asked” side (AKA a tour of the wood shop).

      What’s your QA process when you use AI-generated code? Are you doing a thorough review, as though of the work of a junior developer, or doing functional testing? Because this isn’t really an option yet for FileMaker, it’s theoretical to me, but I anticipate a line-by-line review until I’m quite confident in the results. And given my trust issues, that might take me into retirement. 😉

      -Sara
