Substantiated findings
LLMs are good at organizing narratives and findings. Showing the sources that support those conclusions makes it easier to understand the analysis and where it comes from.


When reviewing findings, I want to see the supporting sources so that I can see how each idea is substantiated and trust the conclusions more easily.


- Comparing Similarity in Source Materials: Gathering related information in one place makes it easier to compare how similar the source materials are.
- Building Trust through Access to Original Sources: Linking each finding to its original sources builds trust, because users can review the basis of every conclusion for themselves, as sketched below.
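To make the pattern concrete, here is a minimal sketch in TypeScript of how a finding might carry references back to the passages that support it. The field names and structure are illustrative assumptions, not the schema of any particular product.

```typescript
// Illustrative sketch only: names and fields are assumptions,
// not a reference to any specific product or API.

// A pointer back into the original source material.
interface SourceReference {
  documentTitle: string;   // e.g. "Q3 customer interviews"
  excerpt: string;         // the quoted passage that supports the finding
  location?: string;       // optional locator, e.g. page, timestamp, or URL fragment
}

// A finding is only "substantiated" when it carries its supporting sources.
interface SubstantiatedFinding {
  statement: string;            // the conclusion the model produced
  sources: SourceReference[];   // the passages the statement is grounded in
}

// Render a finding with its citations so the reader can jump
// from the conclusion straight to the evidence.
function renderFinding(finding: SubstantiatedFinding): string {
  const citations = finding.sources
    .map((s, i) => `  [${i + 1}] ${s.documentTitle}: "${s.excerpt}"`)
    .join("\n");
  return `${finding.statement}\n${citations}`;
}
```

Even a small structure like this makes the pattern checkable: a finding with an empty sources list can be flagged in the interface as unsubstantiated.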

More of the Witlist

AI actions often take time to complete. To improve user experience, use descriptions of what is happening combined with basic animations that represent different types of actions.

Starting with a blank canvas can be intimidating, but providing prompt starters can help individuals overcome this initial hurdle and jumpstart their creativity.

Treating the source input as ground truth helps users trust the system and makes it easier to interpret its process and spot where it might have gone wrong.

Presenting multiple outputs helps users explore options and identify their preferences, while their choices provide valuable feedback that can even be used to improve the model.

Textual information often lacks intuitive cues for understanding relationships between ideas. AI can clarify these connections, making complex information easier to grasp quickly.

Generating multiple outputs and iteratively using selected ones as new inputs helps people uncover ideas and solutions, even without clear direction.