I Asked My AI Assistant to Print a Transaction Form
This morning I needed to print a transaction form for my cash ledger.
The form already existed as a small HTML file in my ledger repository:
~/Personal/ledger/2026/cash/transaction-form.html
Normally this is the kind of tiny admin task that eats attention.
Open the folder. Open the file in a browser. Print. Realize the wrong queue is selected. Try again. Maybe the printer is disabled. Maybe the job is stuck. Maybe the page scaling is wrong. None of this is hard, but all of it is friction.
So I asked my AI assistant:
open transaction-form.html from ledger repo, it’s in 2026/cash folder. open in a browser to render as PDF and print this PDF
And it did the boring work.
The flow
The agent started a temporary local web server from the cash ledger folder:
cd ~/Personal/ledger/2026/cash
python3 -m http.server 8765 --bind 127.0.0.1
Then it rendered the HTML page into a PDF with headless Chrome:
google-chrome-stable \
  --headless \
  --no-sandbox \
  --disable-gpu \
  --print-to-pdf=/tmp/transaction-form-20260514-085744.pdf \
  http://127.0.0.1:8765/transaction-form.html
Then it checked the available printer queues:
lpstat -p -d
There were two Xerox queues. One was disabled. The Bonjour queue was enabled.
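That check boils down to parsing `lpstat -p` output, which marks each printer as enabled or disabled. A minimal sketch of the selection step — the queue names below are made up, and real `lpstat` output varies slightly between CUPS versions:

```shell
# Print the name of the first enabled queue from `lpstat -p` style output.
# Reads the listing on stdin; the sample queue names are hypothetical.
pick_enabled_queue() {
  awk '/^printer / && !/disabled/ { print $2; exit }'
}

printf '%s\n' \
  'printer Xerox_USB disabled since Tue 14 May 2026 08:00:00' \
  'printer Xerox_Bonjour is idle.  enabled since Tue 14 May 2026 08:00:00' \
  | pick_enabled_queue
# prints: Xerox_Bonjour
```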
So the agent sent the generated PDF there:
lp -d '[email protected]' \
  /tmp/transaction-form-20260514-085744.pdf
The first job failed.
This is where the agent became useful in a very mundane, very real way. I just said:
retry
It checked the queue, cancelled the failed job, and sent the PDF again with page fitting enabled:
lp -d '[email protected]' \
  -o fit-to-page \
  /tmp/transaction-form-20260514-085744.pdf
This time CUPS reported:
Printer is printing
Done.
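That retry is a pattern worth keeping: try the plain command, and if it fails, clean up and try a variant. As a generic helper — my own sketch, not the agent's code; both commands are passed as strings, and the real flow also cleared the stuck job with `cancel` first:

```shell
# Run a primary command; if it exits non-zero, run a fallback variant.
# With CUPS this would be roughly:
#   with_fallback "lp -d QUEUE form.pdf" "lp -d QUEUE -o fit-to-page form.pdf"
with_fallback() {
  sh -c "$1" || sh -c "$2"
}

with_fallback 'false' 'echo fallback ran'
# prints: fallback ran
```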
The important part was not printing
Printing one PDF is not impressive.
What matters is the loop:
- I gave the agent the intent, not step-by-step instructions.
- It found the file in the right repository.
- It rendered HTML to PDF instead of trying to print raw HTML.
- It inspected the printer state.
- It handled a failed print job.
- It retried with the correct option.
- It stopped the temporary server afterwards.
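That last step is the one most easily forgotten. The pattern — run a foreground task while a background helper like `python3 -m http.server` is alive, then stop the helper — can be sketched as a small function. This is a hypothetical helper of mine, not what the agent ran:

```shell
# Run the remaining arguments as a command while BG_CMD (a single string)
# runs in the background; kill the background process afterwards and
# preserve the foreground command's exit status.
with_background() {
  sh -c "$1" &
  bg_pid=$!
  shift
  "$@"
  status=$?
  kill "$bg_pid" 2>/dev/null
  wait "$bg_pid" 2>/dev/null
  return "$status"
}

# Usage, mirroring the flow above:
# with_background 'python3 -m http.server 8765 --bind 127.0.0.1' \
#   google-chrome-stable --headless --print-to-pdf=/tmp/form.pdf \
#   http://127.0.0.1:8765/transaction-form.html
```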
This is exactly where personal agents shine.
Not in replacing deep work. Not in generating huge documents. In removing the boring connective tissue between tools.
The shell. The browser. The printer queue. The local repo. The remembered preference that the form should be PDF first.
Turning the fix into memory
After it worked, I asked:
extract into a skill
The agent updated my existing ledger-cash-capture skill with the reusable procedure:
- use `2026/cash/transaction-form.html`
- render HTML to PDF with headless Chrome
- check printers with `lpstat`
- prefer the enabled Bonjour Xerox queue
- print with `lp`
- retry failed jobs with `-o fit-to-page`
- stop the temporary HTTP server afterwards
Then I added a Russian trigger too:
напечатай форму для транзакций ("print the transaction form")
So next time I do not need to remember any of this. I can just ask for the thing I want.
This is why skills matter
A good agent should not only solve the task. It should get better after solving it.
The first time, it explores.
The second time, it follows a known path.
That is the difference between a chatbot and a useful personal automation system. The value is not only in the model. It is in the accumulated procedural memory around your actual life.
Tiny example. Real workflow. Printed paper in hand.
That is the kind of AI automation I like.