7-Eleven’s Data Documentation Dilemma
7-Eleven’s data ecosystem is large and sophisticated, housing thousands of tables with hundreds of columns across our Databricks environment. This data forms the backbone of our operations, analytics, and decision-making processes. Traditionally, 7-Eleven’s data dictionary and documentation lived in Confluence pages, meticulously maintained by data team members who manually documented table and column definitions.
We hit a critical roadblock as we began exploring the AI-powered features of the Databricks Data Intelligence Platform, including AI/BI Genie, intelligent dashboards, and other applications. These advanced tools rely heavily on table metadata and comments embedded directly in Databricks to generate insights, answer questions about our data, and build automated visualizations. Without proper table and column comments in Databricks itself, we were essentially leaving powerful AI capabilities on the table. For example, when Genie lacks column definitions, it can misinterpret the meaning of bespoke columns, forcing end users to clarify. Once we enriched our metadata, Genie’s contextual understanding improved dramatically: it accurately identified column purposes, surfaced the right tables in response to natural language queries, and produced far more relevant and actionable insights. Simply put, Genie, like any AI agent, gets more thoughtful and more helpful when it has better metadata to work with.
The gap between our well-documented Confluence pages and our “metadata-light” Databricks environment was preventing us from realizing the full potential of our data platform investment.
Manual Migration’s Impossible Scale
When we initially considered migrating our documentation from Confluence to Databricks, the scale of the challenge became immediately apparent. With thousands of tables containing hundreds of columns each, a manual migration would require:
- Time-intensive labor: Hundreds of person-hours to copy and paste documentation
- Manual metadata updates: Crafting thousands of individual SQL statements to update metadata, or visiting each table’s UI
- Project oversight: Implementing a tracking system to ensure all tables were properly updated
- Quality assurance: Creating a validation process to catch inevitable human errors
- Ongoing upkeep: Establishing a maintenance protocol to keep both systems in sync
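To make the scale concrete: each table needs one table-level comment statement plus one per column. With illustrative counts (the table and column names below are hypothetical, not our actual figures), a minimal back-of-the-envelope sketch shows the volume of hand-written DDL involved:

```python
# Back-of-the-envelope: DDL volume a manual migration would need.
# Counts and names here are illustrative, not 7-Eleven's actual figures.

TABLES = 5000          # "thousands of tables"
COLS_PER_TABLE = 100   # "hundreds of columns each"

# One comment statement per table plus one per column:
total_statements = TABLES * (1 + COLS_PER_TABLE)
print(total_statements)  # 505000 hand-written statements

# Each statement would look like one of these (hypothetical table/column):
table_comment = (
    "COMMENT ON TABLE sales.store_txn "
    "IS 'Point-of-sale transactions by store'"
)
column_comment = (
    "ALTER TABLE sales.store_txn ALTER COLUMN store_id "
    "COMMENT 'Store identifier'"
)
```

Even at a generous pace of one statement per minute, that volume translates to thousands of hours of copy-and-paste work.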
Human error would be unavoidable even if we dedicated significant resources to this effort. Some tables would be missed, comments would be incorrectly formatted, and the process would likely need to be repeated as documentation evolved. Moreover, the tedious nature of the work would likely lead to inconsistent quality across the documentation.
Most concerning was the opportunity cost. While our data team focused on this migration, they couldn’t work on higher-value initiatives. Every day, we faced delays in strengthening our Databricks metadata, leaving untapped potential in the AI/BI capabilities already at our fingertips.
The Intelligent Document Processing Pipeline
To solve this challenge, 7-Eleven developed a sophisticated agentic AI workflow powered by Llama 4 Maverick, deployed via Mosaic AI Model Serving, that automated the entire documentation migration through an intelligent multistage pipeline:
- Discovery phase: The agent uses Databricks APIs to enumerate all tables, table names, and column structures.
- Document retrieval: The agent pulls all relevant data dictionary documents from Confluence, creating a corpus of candidate documentation sources.
- Reranking and filtering: Using advanced reranking algorithms, the system prioritizes the most relevant documentation for each table, filtering out noise and irrelevant content. This critical step ensures we match tables with their proper documentation even when naming conventions aren’t perfectly consistent.
- Intelligent matching: For each Databricks table, the AI agent analyzes candidate documentation matches, using contextual understanding to determine the correct Confluence page even when names don’t match exactly.
- Targeted extraction: Once the correct documentation is identified, the agent extracts the relevant descriptions for both tables and their columns, preserving the original meaning while formatting appropriately for Databricks metadata.
- SQL generation: The system automatically generates properly formatted SQL statements to update the Databricks table and column comments, handling special characters and formatting requirements.
- Execution and verification: The agent runs the SQL updates and, through MLflow tracking and evaluation, verifies that metadata was applied correctly, logs results, and surfaces any issues for human review.
- Monitoring and insights: The team also uses an AI/BI Genie dashboard to track project metrics in real time, ensuring transparency, quality control, and continuous improvement.
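The stages above can be sketched in miniature as follows. This is a simplified illustration under stated assumptions, not 7-Eleven’s actual implementation: the Databricks, Confluence, and Llama 4 calls are stubbed with in-memory data, the reranking stage is reduced to simple token overlap, and all table names and descriptions are hypothetical.

```python
# Simplified sketch of the migration pipeline. Databricks, Confluence,
# and model-serving calls are stubbed; reranking is reduced to token
# overlap purely for illustration.

def rank_docs(table_name: str, docs: dict) -> str:
    """Stand-in for reranking/matching: pick the Confluence page whose
    title shares the most tokens with the table name."""
    tokens = set(table_name.lower().replace("_", " ").replace(".", " ").split())
    return max(docs, key=lambda title: len(tokens & set(title.lower().split())))

def escape(text: str) -> str:
    """Escape single quotes so descriptions embed safely in SQL literals."""
    return text.replace("'", "''")

def build_updates(table: str, doc: dict) -> list:
    """SQL-generation stage: emit comment DDL from an extracted record."""
    stmts = [f"COMMENT ON TABLE {table} IS '{escape(doc['table_desc'])}'"]
    for col, desc in doc["columns"].items():
        stmts.append(
            f"ALTER TABLE {table} ALTER COLUMN {col} COMMENT '{escape(desc)}'"
        )
    return stmts

# Stubbed discovery and retrieval results (hypothetical names).
tables = ["sales.store_txn"]
confluence = {
    "Store Txn Data Dictionary": {
        "table_desc": "Point-of-sale transactions by store",
        "columns": {"store_id": "Store identifier"},
    },
    "HR Onboarding Guide": {"table_desc": "", "columns": {}},
}

for t in tables:
    page = rank_docs(t, confluence)                 # reranking + matching
    for stmt in build_updates(t, confluence[page]):  # extraction + SQL gen
        print(stmt)  # the execution stage would run these against Databricks
```

In the real pipeline, the token-overlap heuristic is replaced by an LLM-backed reranker, and the verification stage replays the applied comments through MLflow tracking rather than simply printing them.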
This intelligent pipeline transformed months of tedious, error-prone work into an automated process that completed the initial migration in days. The system’s ability to understand context and make intelligent matches between differently named or structured sources was key to achieving high accuracy.
Since implementing this solution, we plan to migrate documentation for over 90% of our tables, unlocking the full potential of Databricks’ AI/BI features. What began as a lightly used AI assistant has evolved into an everyday tool in our data workflows. Genie’s ability to understand context now mirrors how a human would interpret the data, thanks to the column-level metadata we injected. Our data scientists and analysts can now use natural language queries through AI/BI Genie to explore data, and our dashboards leverage the rich metadata to produce more meaningful visualizations and insights.
The solution continues to provide value as an ongoing synchronization tool, ensuring that as our documentation evolves in Confluence, those changes are reflected in our Databricks environment. This project demonstrated how thoughtfully applied AI agents can solve complex data governance challenges at enterprise scale, turning what seemed like an insurmountable documentation task into an elegant automated solution.
Want to learn more about AI/BI and how it can help unlock value from your data? Learn more here.