
Satisfice, Inc.

WAIT#2 Call for Participation

Published: April 29, 2024 by James Bach

The Workshop on AI in Testing (WAIT) #2

WAIT is a small, two-day, online, non-commercial, LAWST-style peer conference.

  • Facilitator: Jon Bach
  • Content Owner: James Bach
  • Dates: June 29-30, 2024
  • Times: 7am-1pm PDT (16:00-22:00 UTC+2)
  • Media: Zoom
  • Attendees: Up to 20

Who can attend?

We are looking for people with experience testing AI systems and/or applying AI to testing.

If you are such a person and you want to be invited, send an email to peerconference@satisfice.com. Summarize your experience and confirm that you are willing to give an experience report. We may accept people who are not offering an experience report, but we will favor those who have one to share.

More About the Theme of the Conference

Testers such as James Bach, Michael Bolton, Wayne Roseberry, Nate Custer, and Ben Simo have run experiments and done close analyses of public demos of purported uses of AI to make testing better or faster. For the most part, what they’ve seen is underwhelming. Some of it is laughably bad. Yet claims continue to be made that AI can help testing, and companies that produce development and testing tools are apparently racing to put AI features into their products.

Is there anything about this trend that lives up to the hype? Or is it all just a big noise, signifying nothing? What the industry needs is sober testing professionals to evaluate these claims.

We’d like to hear experiences from anyone who has tried to use AI for real testing (this can include a realistically complex experiment) and evaluated the results, rather than merely trusting that the tool worked. We are not interested in AI fanboys demoing their latest reskinning of ChatGPT. If all you have is a flashy demo, expect pointed criticism.

The content owner will particularly seek answers to these questions:

  • What does the AI claim to do?
  • Did the AI really do what it claimed to do?
  • Can it reliably do that under realistic conditions?
  • What special work must humans do to support it and supervise it?
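The questions above amount to an evaluation protocol: state the claim, check the verdicts against known ground truth, and repeat the trial to see whether the result holds up. As a rough illustration of that protocol (not anything from the workshop itself), here is a minimal sketch in Python. Every name in it — `evaluate_claim`, `flaky_judge`, the sample cases — is a hypothetical stand-in, not a real tool or API.

```python
import random
from collections import Counter

def evaluate_claim(ai_judge, labeled_cases, runs=5):
    """Run a claimed AI test oracle repeatedly over cases with known
    verdicts; report per-case majority verdict, correctness, and
    run-to-run consistency, plus overall accuracy."""
    results = {}
    for case, expected in labeled_cases:
        verdicts = [ai_judge(case) for _ in range(runs)]
        majority, freq = Counter(verdicts).most_common(1)[0]
        results[case] = {
            "expected": expected,
            "majority": majority,
            "correct": majority == expected,
            "consistency": freq / runs,  # 1.0 = same verdict every run
        }
    accuracy = sum(r["correct"] for r in results.values()) / len(results)
    return accuracy, results

# Hypothetical stand-in for an AI tool: deterministic on easy cases,
# a coin flip on the hard one -- the kind of unreliability the
# "realistic conditions" question is probing for.
def flaky_judge(case):
    return "bug" if case != "hard" else random.choice(["bug", "ok"])

cases = [("login fails", "bug"), ("slow load", "bug"), ("hard", "ok")]
accuracy, detail = evaluate_claim(flaky_judge, cases, runs=20)
```

The point of the repeated runs is the last two questions: a tool that answers correctly once in a demo may still flip its verdict under repetition, and the consistency score makes that visible.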

What is a LAWST-style peer conference?

A LAWST-style peer conference has a facilitator, a content owner, a theme, and contributors. The facilitator stays out of the discussions. The content owner determines what is on-topic. The theme describes the topic, and each contributor comes ready to present an experience report and/or critically question the reports that are delivered. Peer conferences are the best way we know to share practical technical knowledge on a three to six pizza scale.

An experience report is not a product demo, nor a conceptual presentation about a putative best practice. An experience report is a situation in your professional life where you faced problems and tried to solve them. Whether or not you did solve them, you learned from that experience, and you want to share that learning with others. Experience reports do not require a slide show. You can just talk, or show screenshots or documents. In other words, we don’t need you to do a lot of preparation.

The format of the conference is that someone gives an experience report (typically 15-30 minutes) and then we move to “open season,” where they are asked critical questions by their peers (including the content owner). The presenter responds to questions, comments, and concerns until there are no more left to discuss. There is no set time limit for open season. This means we don’t know in advance how many experience reports we will get through. (At the group’s discretion, we may decide to share some experience reports as “lightning talks” if time is running out.)

Will this meeting be recorded?

Although the meeting itself will not be recorded (to encourage frank discussion and debate), any participant will be free to publish their notes about what transpired, or to reshare any materials that were shared with the gathering.

The organizers will publish a summary of the proceedings.

Filed Under: AI and Testing
