Ethical AI?

I am finally facing it: AI isn’t going away.

My kids played in the orchestra pit for the school musical last week, and one of the great laugh lines in the show (Big Fish) is a chant:

“What do we want?”
     “Nothing to change!”
“When do we want it?”
     “Forever!”

Color me guilty. I’ve had my head a bit in the sand regarding using AI. My recalcitrance isn’t just because I hate change, though. Thinkers and moralists whom I admire see zero positive outcomes in the use of AI. Authors Roxane Gay and Daniel Abraham, whose moral compasses I trust, will be the last holdouts.

I turned to my brother, who is a philosophy professor. How does the good Dr. Pavelich – ethics prof extraordinaire – wrestle with the ethics of AI? Turns out he doesn’t. “It’s 100% useless and evil, so no wrestling needed” (I’m paraphrasing).

I’m watching data center after data center pop up by my home in Central Washington. We have cheap-ish land, super-cheap hydropower, and the raging Columbia River as a cool water source. Microsoft says they’ll be good neighbors, but oof, the change is fast and real and right here. And even if they’re “water positive” by 2030, do we really need massive water heaters when we’re smack dab in the middle of a climate crisis?

But it’s here.

So, as a professional tech consultant, it’s time to figure out an approach I’m OK with.

One of the reasons I love working at Soliant is that I’m not alone in this. Steve Lane, our former President, gave a great talk at Engage 2025 which included a focus on the veracity of AI output. Jeremiah Small, our Principal Technology Strategist, wrote our internal Generative AI guidelines, which hugely influenced my thoughts below. Wim Decorte is a trusted advisor in the truest sense of the phrase, and processing this with him has been invaluable.

AI North Stars

In the hope they’re useful to you, here are our North Stars regarding the use of AI.

  • Utility
  • Security
  • Sustainability
  • Veracity
  • Transparency

UTILITY

First off, is this even helping anyone? Our work is generally focused and specific. The reason we build custom software is that our clients have particular needs – and none of those involve generating images of people with too many hands.

AI is qualified to be a mid-level research assistant. AI is not qualified to create art.

Using AI to predict breast cancer from mammograms – I’m open to the idea that it could be useful.

Karl Jreijiri put together a solution that combs published FileMaker resources to create a very specific chatbot. THAT is useful.

For FileMaker solutions, semantic search on pictures can be powerful, and we’re sure that more AI features are coming that can assist both developers in writing better code and users in unlocking value from their data. But right now, that utility is limited.
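To make “semantic search on pictures” concrete, here is a minimal sketch of the general technique in Python, assuming the open-source sentence-transformers library and its CLIP checkpoint. This is illustrative only (the file names and query are made up), not FileMaker’s own implementation:

```python
# A minimal sketch of embedding-based semantic image search.
# Assumes: pip install sentence-transformers pillow
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP maps images and text into the same vector space.
model = SentenceTransformer("clip-ViT-B-32")

image_paths = ["invoice_scan.png", "site_photo.jpg", "whiteboard.jpg"]  # hypothetical files
image_embeddings = model.encode([Image.open(p) for p in image_paths])

# A plain-language query; no keywords or manual tags required.
query_embedding = model.encode("a photo of a construction site")
scores = util.cos_sim(query_embedding, image_embeddings)[0]

best = scores.argmax().item()
print(f"Best match: {image_paths[best]} (score {scores[best].item():.2f})")
```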

SECURITY

Will your data be feeding a public LLM’s training routines, and do you trust that your data is kept secure? Will you be able to count on the offerings and reliability of a public LLM when you add that as a dependency? Hosting an LLM locally means you won’t incur subscription costs, but more importantly, your data will remain in your control.
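For the local option, here’s a minimal sketch of what “your data stays put” can look like in practice. It assumes Ollama (one popular way to host a model locally) is running on its default port with a model already pulled; the prompt never leaves your machine:

```python
# A minimal sketch of querying a locally hosted LLM via Ollama's HTTP API.
# Assumes: Ollama is installed and running, and `ollama pull llama3` has been run.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "llama3",                   # any locally pulled model
        "prompt": "Summarize this internal memo: ...",
        "stream": False,                     # return one JSON object, not a stream
    },
    timeout=120,
)
print(response.json()["response"])
```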

SUSTAINABILITY

AI uses a crap ton of computing power. A lot of energy, a lot of heat, a lot of water. Given the climate crisis, the benefits had better be significant if we’re going to add more fuel to the fire.

Does every query you have need to go to GPT-4? What’s the smallest footprint possible? I just heard about Small Language Models and am eager to learn more. These could be particularly useful for our clients, who generally work in a specific domain and don’t need their AI to pretend it knows the whole world.
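As a sketch of that “smallest footprint” idea, here’s a hypothetical router in Python. The heuristic, threshold, and model names are all assumptions for illustration; the point is simply to reach for the small model first:

```python
# Hypothetical model router: prefer the smallest adequate model.
ROUTINE_TASKS = ("summarize", "rewrite", "classify", "extract")

def choose_model(prompt: str) -> str:
    """Send short, routine prompts to a small local model and
    reserve the large hosted model for genuinely hard requests."""
    if len(prompt) < 500 and any(task in prompt.lower() for task in ROUTINE_TASKS):
        return "small-local-model"   # smallest footprint first
    return "large-hosted-model"      # escalate only when necessary

print(choose_model("Summarize this support ticket: ..."))  # -> small-local-model
```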

Another red flag: If you use AI to check the veracity of your first AI output, you’re doubling down on the environmental impact.

VERACITY

As Steve said in his Engage talk, hallucinations are a feature, not a bug. Gen AI WILL BE WRONG. It is a pleaser and will lie to you. Whatever your use case, your AI output will need verification. Ensure your plan includes fact-checking to the degree appropriate for the project.

TRANSPARENCY

Don’t lie. Don’t pass off AI work as your own. Be clear with each other/your client/Dr. Pavelich that you used AI as a jumping-off point. Soliant’s NORTH North Star is that we are trusted advisors to our clients. Transparency is critical to that relationship.

We are already seeing some companies change their staff’s performance review process to include checks on whether they passed AI-generated work off as their own.

Moving Forward

So, IF all of the elements above are honored, and the good outweighs the bad…then I can get my head around AI implementations that feel true and proper.

Any north stars you think I’ve missed? How are you wrestling with this (or are you)? I’d love to hear your thoughts.

6 thoughts on “Ethical AI?”

  1. Hey Sara

    I think this is a very important conversation. I want to encourage it! A few things I would add.

    In my experience, AI is less a creator and more an assistant – or at least, when my experience combines with its knowledge, the result is quite impressive.

    And with the deepest of respect – I do strongly disagree with the sentiment that using AI in code generation is somehow not “my own work”. As I reflected on this, if there were/are issues, I wouldn’t blame AI. If I’m responsible – it’s my code. If I wouldn’t blame it, I would not attribute it.

    Since I see AI as a tool, I often think about the impact of the table saw’s invention on woodworkers. I find this helpful as I think through the implications of AI now and in the future. Today, when woodworkers make a cabinet on a table saw, there is no expectation that they stamp the cabinet with “made with a table saw”. Maybe when table saws first came on the scene there might have been a “cheating” sentiment, where the byproducts of this tool were “not real furniture”. But that sentiment does not exist today.

    What I can stand behind – and I expect this is more your point – is that developers should not hide their use of the AI tool. Just as a cabinet maker might welcome prospects to their shop, where the table saw is plain to see, I feel no need to hide that I use AI as a tool. I also feel no need to lead with it.

    Appreciate your thoughts

    Marcus

    1. Thanks for thinking it through, Marcus! Above all, that’s the point.

      I appreciate the responsibility you’re taking for the code you generate, whether that’s with a tool or with your brain. The table saw analogy works. For now, because using GenAI for our code isn’t yet standard or assumed, I like erring on the side of overt transparency. Eventually I might be with you on the “transparent when asked” side (AKA a tour of the wood shop).

      What’s your QA process when you use AI-generated code? Are you doing a thorough review, as though of the work of a junior developer, or doing functional testing? Because this isn’t really an option yet for FileMaker, it’s theoretical to me, but I anticipate a line-by-line review until I’m quite confident in the results. And given my trust issues, that might take me into retirement. 😉

      -Sara

  2. Brian Panhuyzen

    Sara, your piece closely aligns with my own position on AI. I have no doubt it will do incredible things for humanity – if it doesn’t destroy us. And I am distressed by the “let’s AI EVERYTHING” position some of my colleagues take without considering the impact or potential consequences.

    One aspect of AI that is not widely discussed is that it’s a gift to the lazy, and aren’t we all lazy sometimes? I spied a college student beside me at a café recently who was doing a Q&A assignment on networking; I could see him move to each question, start to type a few words, and, almost every time, switch to ChatGPT for an answer he would copy-and-paste. He was too impatient to think about the reply, and because there was an out, he pursued it. (I admit I’m much the same with Waze while driving; I have to actively resist using it to reach a destination via a known route, because it’s just easier and it quells uncertainty.)

    I have other friends in the art world who are patently against AI anything, and I imagine they would have been leading the charge against Gutenberg’s dangerous invention. That position is not realistic, and it abandons any consideration of how we might use AI responsibly, rather than dismissing it entirely.

    Interesting times. Thanks for your timely piece.

    1. Brian, I appreciate your thoughts on this. Related to your observations of the college student: I was just speaking with a friend who is an English prof; she was near retirement anyway but is eager to get out of the biz, finding it too difficult to get students to engage with actually LEARNING English. It’ll be interesting to think through how this trend impacts those of us in hiring positions… Soliant already has a pretty hands-on, show-your-work hiring process, and I think that’ll get even more critical.

  3. Hi Sara
    I love this discussion, and really appreciate that you wrote the post.
    Since I host the Claris Talk AI podcast with Cris Ippolite, there may be some bias on my part in favor of AI, though I do try to keep an open mind. There are many negatives to AI, of course. I’m just trying to use only the positives in my life.

    You mention that use of Generative AI for text should always be attributed, but I’d ask whether you also attribute when you use a spell checker, or use the auto-complete feature that finishes a word you’re typing or suggests the next word as you text on your phone. I submit that these are forms of AI, and the only difference is scale. It’s impossible to draw a line where people agree on what is AI and what is auto-complete. Many find it helpful to think of AI as auto-complete on steroids. I like to think of it as having 1,000 interns. No one attributes use of spell check or grammar check. I’d go further and say the same of using AI to proof text – not only for grammar and spelling but also to edit for clarity, tone, focus, and length. When I do, I don’t feel the need to attribute. Nor would I if I asked my assistant to proof and edit. (She is a native Greek speaker who writes perfect English.) (If you move to a foreign country, HIRE A LOCAL ASSISTANT.)

    A few more cases: If I use AI to fix a complex FileMaker calc, expand it to include more variables, or even suggest ideas or write entire calculations or scripts, it’s still my prompt, and it’s still me pushing the save button, and I don’t attribute AI. It’s my work and I take responsibility, whatever the tool. Like you, I’m neither hiding it nor leading with it. But I am using it every day and I find it immensely useful for countless things from teaching FileMaker to learning Greek.

    AI data centers do use a lot of natural resources, and we never really thought much about that until lately, though data centers have been consuming them since, what, CompuServe? (I think I opened my account in 1980 or ’81.) What’s the cure? How will we come up with better/safer/cheaper energy? The only way I see is AI — helping to cure the problems it’s causing.

    What would it take for you to change your mind about any of this – or your brother’s mind – or mine?

    Note: I didn’t use AI for any part of this response.

    1. Matt, hi!

      Glad you commented, and honestly just having the discussion is a step in the right direction in my mind, so thanks.

      While we’re confessing biases, you might recall my outside-of-work hobbies are gardening and knitting, so my Luddite tendencies certainly influence my gut reaction.

      When I wrote about attributing the use of AI, I was thinking of being transparent with clients if you’re using AI for code generation. Week by week accepted practices may change, but given Soliant’s emphasis on being trusted advisors, I’ll err on the side of extra transparency. (I see your point re: spell-check and auto-complete, but it’s either a straw man argument or I didn’t make my point clearly — very possible!)

      This wide grey band of “what’s acceptable” is going to vary from person to person. I appreciate any pausing to think “What am I OK with?”

      The conversations with clients have been interesting. Some are on-board with AI already, so if we ask permission to use AI to transcribe and summarize a call, for example, it’s no biggie. For others it is a hard “no.” (The privacy element of the equation is more prominent here.) Given that our code is also work for hire, I want our clients to be on-board with our choices. I’m with you on one point entirely, though: if we use AI to generate code, the code is still our responsibility.

      As for the data centers… I see this in the same vein as shopping for local, in-season produce. (Stay with me.) I can live with having strawberries just when they’re available from CA, OR or WA. I don’t need to pay for the fossil fuels that bring strawberries from S. America to my Safeway in December. But I will gladly bury my reservations when it comes to buying chocolate and coffee year-round. Similarly, there are frivolous uses of AI that I don’t judge as worth the environmental impact. And then there are those (finding better/safer/cheaper energy, scanning mammograms) that are worth it.

      What will it take to change my mind? I’d love to see more of those positive-for-humanity use cases. I don’t think 1,000 fake assistants plus your very competent real assistant could change my brother’s mind. 🙂
