Utah Adopts Rule 707: Machine-Generated Evidence Now Subject to Rule 702-Style Gatekeeping (Effective March 6, 2026)

TL;DR:

  • Utah becomes one of the first states to enact a dedicated evidentiary standard for machine-generated evidence, adopting Utah Rule of Evidence 707 (Machine-Generated Evidence), effective March 6, 2026.
  • The rule imposes Rule 702-like reliability requirements on outputs produced by machine-based systems when offered without an expert witness, with simple instruments exempted.
  • Practitioners must plan for closer scrutiny of AI-driven outputs, disclosure of prompts and data sources, and robust pretrial preparation to preserve admissibility and enable effective cross-examination.
  • The Utah rule aligns with a broader national trend toward tighter gatekeeping for AI outputs, even as the federal process for Proposed FRE 707 continues to unfold.
  • Practical steps now include documenting data provenance, validating method reliability, and developing a clear strategy for when to introduce machine-generated evidence with or without expert support.

Utah’s March 6, 2026 milestone

Effective March 6, 2026, Utah Rule of Evidence 707 adds a formal framework for machine-generated evidence in Utah courts. The new rule defines “machine-generated evidence” as information produced by a machine-based system that autonomously processes data to generate inferences, predictions, classifications, or conclusions, and it creates a gatekeeping regime parallel to the reliability standards that govern expert testimony. The rule also clarifies that outputs from simple scientific instruments remain outside its scope, and it specifies how machine-generated evidence may be admitted when offered without an expert witness, either entered directly or accompanied by lay testimony. This adoption marks a concrete, immediately actionable change in trial practice within Utah state courts. (legacy.utcourts.gov)

What Rule 707 does in practical terms

  • Gatekeeping modeled on Rule 702: Utah Rule 707 requires that when machine-generated evidence would be governed by Rule 702 if presented by a human expert, the court may admit it only if the output satisfies four core criteria: it will help the trier of fact, is based on sufficient facts or data, is the product of reliable principles and methods, and reflects a reliable application of those principles to the facts of the case. In short, AI outputs face an expert-like reliability screen even when no human expert testifies. (legacy.utcourts.gov)
  • Narrowly tailored to machine-generated outputs: The rule distinguishes between machine-generated inferences and outputs of simple scientific instruments, ensuring that routine measurements and instrument readings remain outside the Rule 707 framework. This helps avoid broad, unintended coverage of ordinary evidentiary sources. (legacy.utcourts.gov)
  • Direct entry or lay testimony: Machine-generated evidence may be admitted if entered directly or if accompanied by lay testimony. This creates two avenues for introduction but still mandates the reliability gatekeeping when no expert testifies. (legacy.utcourts.gov)
  • Effect on trial strategy: For trial teams, this means that AI-driven outputs—ranging from predictive models and risk scores to certain geolocation inferences or classification results—will be scrutinized with the same rigor as expert opinions when offered without an expert. Counsel should prepare to defend the data sources, data handling, model assumptions, and the chain of custody behind the outputs. (legacy.utcourts.gov)

Implications for case strategy and trial readiness

  • Pretrial data governance becomes critical: Practitioners should develop a robust data provenance plan. Who produced the machine outputs, what data fed the model, what version of the algorithm was used, and what prompts or parameters shaped the result all matter under Rule 707. Expect opposing counsel to demand disclosure of data sources, model structure, and any training data that could affect reliability. The reliability standard demands transparency about how the output was generated, not just the final result. (legacy.utcourts.gov)
  • Cross-examination and witness preparation: Even when machine-generated evidence is admitted with lay testimony, attorneys should be prepared to challenge the reliability of the underlying methods, including the data quality, algorithmic bias, and potential black-box concerns. Where appropriate, engineers or data scientists can be consulted to explain why the methods meet the Rule 707 criteria, mirroring the traditional cross-examination of expert testimony. (legacy.utcourts.gov)
  • Case types most affected: Military, healthcare, finance, policing, and consumer data cases frequently rely on AI-driven outputs or algorithmic inferences. In Utah courts, practitioners will now need to consider whether such outputs are admissible without an expert and, if admitted, whether the output meets each prong of 707’s reliability test. Simple instrument readings remain outside the rule, which is a helpful boundary for many types of technical evidence. (legacy.utcourts.gov)
  • Coordination with discovery and preservation: Because admissibility hinges on reliable data and methods, document preservation and discovery around the data inputs, system configuration, and post-processing steps become essential. Counsel should seek to obtain the relevant data logs, prompts, and model metadata as part of pretrial disclosures where the outputs might become evidence. This is not only about admissibility but also about preventing later disputes that can stall trials. (legacy.utcourts.gov)

Practical steps for Utah trial teams now

  • Build a 707-ready evidentiary appendix: Create a ready-to-use dossier for every machine-generated output you plan to introduce. Include a description of the machine, the data inputs, data quality checks, the principles and methods used, and how the final conclusion was applied to the facts. This preemptive work mirrors the preparation typical for expert reports under Rule 702; a minimal, illustrative sketch of such a dossier appears after this list. (legacy.utcourts.gov)
  • Engage early when AI outputs are involved: If the case hinges on or substantially relies upon a machine-generated result, consider early engagement with a data scientist or an expert who can articulate how the reliability standards are satisfied. This does not require an expert witness at trial, but an expert can help frame the reliability argument for the court and opposing counsel. (legacy.utcourts.gov)
  • Draft targeted objections and responses: Prepare objections under Rule 707 for arguments that the machine-generated evidence is unreliable, incomplete, or based on insufficient facts. Have responses ready about why the data inputs meet the “sufficient data” standard and why the methods are reliably applied to the facts. (legacy.utcourts.gov)
  • Consider the boundary with simple instruments: In cases involving standard sensor readings or other direct measurements, verify whether the evidence truly falls outside Rule 707 as the output of a “simple scientific instrument.” This boundary is essential to avoid unnecessary gatekeeping and to preserve admissibility of routine data. (legacy.utcourts.gov)
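
To make the appendix idea concrete, here is a minimal, hypothetical Python sketch of the information such a dossier might capture. Rule 707 does not prescribe any format; the MachineOutputDossier class, its field names, and the example values are illustrative assumptions, not requirements drawn from the rule.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of one entry in a "707-ready" evidentiary appendix.
# Field names are illustrative only; Rule 707 does not prescribe a format.

@dataclass
class MachineOutputDossier:
    system_name: str                  # machine-based system that produced the output
    system_version: str               # algorithm or model version actually used
    output_description: str           # the inference, prediction, or classification offered
    data_inputs: List[str]            # data sources that fed the system
    data_quality_checks: List[str]    # validation steps run on the inputs
    principles_and_methods: str       # principles and methods the system applies
    application_to_facts: str         # how the output was applied to the facts of the case
    prompts_or_parameters: Optional[str] = None   # prompts or settings that shaped the result
    custodians: List[str] = field(default_factory=list)  # chain-of-custody contacts

# Example entry for a hypothetical risk-scoring output
example = MachineOutputDossier(
    system_name="VendorRiskModel",
    system_version="2.3.1",
    output_description="Risk score of 0.82 for the disputed transaction batch",
    data_inputs=["transaction_log_2025Q4.csv", "customer_master.parquet"],
    data_quality_checks=["null-rate audit", "schema validation", "duplicate screen"],
    principles_and_methods="Gradient-boosted classifier trained on labeled fraud data",
    application_to_facts="Score generated from the November transactions at issue",
    prompts_or_parameters="threshold=0.8; feature set frozen as of 2025-10-01",
)
```

Compiling a record like this for each exhibit during pretrial preparation lets counsel speak to every Rule 707 prong (helpfulness, sufficient data, reliable methods, reliable application) without reconstructing the details at trial.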

Interplay with the broader evidentiary landscape

  • A federal track alongside state adoption: The federal system continues to consider Proposed Federal Rule of Evidence 707, which would apply Rule 702 standards to machine-generated evidence offered without an expert. The public comment period on proposed FRE 707 closed in February 2026, signaling ongoing national deliberation even as Utah implements its own version. This parallel timing invites practitioners to monitor developments in both spheres, since federal and state trends could influence practice nationwide. (justice.org)
  • Practical impact beyond Utah: While the Utah amendment is state-specific, it reflects a broader movement toward disciplined scrutiny of AI-driven evidence. Litigation teams should anticipate that more jurisdictions will adopt similar gatekeeping approaches or at least require heightened disclosure and testing for machine-generated outputs as part of trial-prep playbooks. Expectations about AI evidence should be kept aligned with these evolving standards, including the ongoing federal discussions. (my.vanderbilt.edu)

Next steps for litigators

  • Audit existing AI-driven evidence in ongoing matters: If a case uses machine-generated outputs, review the data lineage, methods, and potential challenges to reliability under Rule 707. Plan for pretrial motions or evidentiary hearings that address the admissibility hurdle head-on. (legacy.utcourts.gov)
  • Update checklists and training: Trial teams should incorporate 707-ready protocols into checklists, including data sourcing, model governance, and expert collaboration when appropriate. Consider internal or external training to ensure trial teams can articulate the reliability standards to judges and juries alike. (legacy.utcourts.gov)
  • Stay informed on federal developments: Maintain awareness of the ongoing FRE 707 process at the federal level and monitor state-by-state adoptions that may appear in the coming months. The intersection of state and federal rules will shape best practices for AI in the courtroom across jurisdictions. (justice.org)

Note: This timely development highlights how state-level rules can directly affect day-to-day trial readiness and evidentiary strategy. Practitioners should treat the adoption of Rule 707 as a concrete cue to elevate data governance, method disclosure, and cross-examination planning when AI-driven outputs enter evidence at trial.