The Optimization
The AI coding assistant does not have a name. The interface calls it “Assistant.” Jeff calls it nothing. He types prompts into the chat window and it returns code. The code works. That is the transaction.
Jeff has been using it for three months. The first prompts were simple. Write a Python script to parse RTSP camera feeds and save frames at 2fps. The assistant returned thirty-eight lines of code. Jeff ran it. It worked. He asked for refinements. Add motion detection using OpenCV. The assistant returned an updated script. It worked. Add timestamp logging and output to JSON. It worked.
The prompts became more specific. Flag frames where the same vehicle appears in multiple camera zones within a 10-minute window. Identify pedestrians who stop for more than 30 seconds in a commercial loading zone. Surface license plates that appear more than three times per week in non-residential parking areas. The assistant returned code. The code worked.
Jeff did not write the surveillance system. The assistant wrote the surveillance system. Jeff wrote the requirements. That is also a kind of engineering.
Last week, Jeff typed a new prompt.
Modify the anomaly detection module to surface any pattern that deviates from baseline behavior. Define baseline as the observed norm for each entity (vehicle, pedestrian, location) over a rolling 30-day window. Flag deviations that exceed 1 standard deviation from baseline.
The assistant returned an updated script. Jeff reviewed it. The logic was sound. Standard deviation is a neutral metric. It measures difference, not wrongness. The distinction mattered, he suspected, more than the measurement. Jeff deployed it.
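The prompt reduces to a simple rule: keep a rolling window of each entity's recent observations, and flag any new value more than one standard deviation from the window's mean. A minimal sketch of that logic, with hypothetical names (the story never shows the assistant's actual code):

```python
from collections import deque
import statistics

def make_baseline_flagger(window_days=30, threshold_sd=1.0):
    """Return a function that flags values deviating from a rolling baseline."""
    history = deque(maxlen=window_days)

    def observe(value):
        flagged = False
        if len(history) >= 2:
            mean = statistics.mean(history)
            sd = statistics.stdev(history)
            # Flag anything more than threshold_sd standard deviations out.
            flagged = sd > 0 and abs(value - mean) > threshold_sd * sd
        history.append(value)
        return flagged

    return observe

# The dog walker's daily loop time in minutes: a stable baseline, then Sunday.
observe = make_baseline_flagger()
for minutes in [32, 31, 33, 32, 32, 31]:
    observe(minutes)   # builds the baseline; at 1 SD, even small wobbles can flag
print(observe(47))     # the forty-seven-minute loop -> prints True
```

A one-standard-deviation threshold is deliberately loose: under anything like a normal distribution it flags roughly a third of ordinary observations, which is why the flag counts in the story keep climbing.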
The system began flagging anomalies.
A woman who walks south on Jefferson Avenue every weekday at 7:15 AM did not walk on Tuesday. Flagged.
A vehicle that parks in the Marshall Street garage, Zone B, Row 3, every Monday through Thursday parked in Zone C, Row 7, on Wednesday. Flagged.
A building on Maple Street that typically shows interior lights from 6:00 AM to 11:00 PM showed lights until 1:14 AM on Friday. Flagged.
A pedestrian who usually enters the library through the north entrance used the south entrance on Saturday. Flagged.
A dog walker who completes a 1.2-mile loop in thirty-two minutes completed it in forty-seven minutes on Sunday. Flagged.
By Monday morning, there were sixty-three new flags.
Jeff reviews them. He opens each one. He pulls the footage. He watches.
The woman who did not walk on Tuesday: Jeff scrubs through the day’s footage. She does not appear. He checks the adjacent cameras. No sign of her. He cross-references the previous week. She walked Monday, Tuesday, Wednesday, Thursday, Friday. The pattern held for four weeks. Why did she skip this Tuesday?
Jeff adds her to the watch list.
The vehicle in the wrong parking zone: Jeff pulls the driver’s profile. The vehicle is registered to a man named Aaron Voss. Aaron works in the building adjacent to the Marshall Street garage. Aaron has parked in Zone B for six months. Why did Aaron park in Zone C this Wednesday?
Jeff adds Aaron to the watch list.
The building with late lights: Jeff checks the property records. The building is a small accounting office. Four employees. Business hours are 8:00 AM to 6:00 PM. Why were the lights on until 1:14 AM?
Jeff adds the building to the watch list.
He works through the list. Each flag requires investigation. Each investigation generates questions. The questions do not have answers yet, but that is normal. Patterns reveal themselves over time. The flags are leads. Leads become evidence, or so the logic goes.
By Wednesday, there are ninety-one new flags.
A traffic light on Broadway that typically cycles every ninety seconds cycled every seventy-eight seconds on Tuesday afternoon. Flagged.
A mail carrier who delivers to the same block every weekday between 10:00 AM and 11:30 AM delivered between 2:00 PM and 3:15 PM on Thursday. Flagged.
A man who exits the Starbucks on El Camino every morning carrying a venti cup exited carrying a grande cup on Monday. Flagged.
Jeff reviews the traffic light. He pulls the city’s traffic management data. There is no maintenance log for Tuesday. No scheduled adjustments. The cycle time changed. Why?
He adds the intersection to the watch list.
He reviews the mail carrier. He pulls the route data from USPS. There is no public access, but he finds a PDF of the delivery zone map on a community forum. The carrier’s route is consistent with the 10:00 AM to 11:30 AM window. The Thursday deviation is unexplained.
He adds the carrier to the watch list.
He reviews the man with the wrong coffee size. He pulls three weeks of footage. The man orders a venti every weekday. Once, eighteen days ago, he ordered a grande. Otherwise, venti. The deviation rate is low, but it exists.
Jeff does not add the man to the watch list. This flag is noise.
By Friday, there are one hundred and thirty-four new flags.
Jeff stops reviewing them individually. There are not enough hours in the day. He writes a new prompt.
Prioritize flagged anomalies based on recurrence frequency and deviation magnitude. Rank results so the most significant deviations appear first.
The assistant returns an updated script. Jeff deploys it.
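The prioritization Jeff asked for amounts to a sort key over two fields. A sketch of what the assistant's ranking might look like, with hypothetical field names:

```python
def rank_flags(flags):
    """Rank anomaly flags: recurring, large deviations first.

    Each flag is a dict with hypothetical keys:
      'entity'     -- who or what deviated
      'recurrence' -- how many times this deviation has recurred
      'magnitude'  -- deviation size, in standard deviations from baseline
    """
    return sorted(flags, key=lambda f: (f["recurrence"], f["magnitude"]), reverse=True)

flags = [
    {"entity": "coffee-size man", "recurrence": 1, "magnitude": 1.1},
    {"entity": "walking woman",   "recurrence": 3, "magnitude": 4.2},
    {"entity": "parked vehicle",  "recurrence": 1, "magnitude": 2.5},
]
print(rank_flags(flags)[0]["entity"])   # prints walking woman
```

Sorting on the tuple `(recurrence, magnitude)` means recurrence dominates and magnitude breaks ties, which matches why the woman's three consecutive absences land at the top of Jeff's list.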
The system begins ranking the flags. The woman who skipped her walk appears at the top. She has now skipped three days. The deviation magnitude is high. Jeff opens her file. He has twelve clips of her walking. Same route. Same time. Same pace. Then nothing. Three consecutive absences.
Jeff searches her route for alternate cameras. He finds one that covers the adjacent street. He scrubs through the footage. On the first day she did not walk, a vehicle matching her profile (white sedan, 2010s model) is visible in a driveway two blocks from her usual route. The vehicle does not move all day.
On the second day, the vehicle is gone.
On the third day, the vehicle reappears.
Jeff adds a note to her file: Possible vehicle issue. Alternate transportation. Monitor for pattern return.
He moves to the next flag. A business on Main Street that receives deliveries every Tuesday and Thursday received a delivery on Saturday. Jeff pulls the footage. The delivery truck is unmarked. The driver unloads three boxes. The boxes are unlabeled. Jeff zooms in. The resolution is insufficient. He adds the business to the high-priority watch list.
He moves to the next flag. A pedestrian who typically walks with a slight limp did not limp on Wednesday. Jeff pulls the footage. The pedestrian’s gait is normal. Smooth. No hesitation. On Thursday, the limp returns.
Jeff adds a note: Gait inconsistency. Medical variance or alternate individual. Requires facial confirmation.
He works through the ranked list. By 11:00 PM, he has processed forty-two flags. Ninety-two remain.
He writes a new prompt.
Expand anomaly detection to include second-order deviations. Flag entities whose behavior changes in proximity to other flagged entities. Example: if Entity A is flagged for deviation, check if Entity B, who typically interacts with Entity A, also shows deviation during the same time window.
The assistant returns an updated script. Jeff reads the code. The logic is elegant. It maps relationships. If two entities deviate simultaneously, the system infers a connection. Correlation does not imply causation, but it implies something.
Jeff deploys the script.
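The second-order rule is a graph traversal: for each flagged entity, check its usual contacts for deviations in the same window. A minimal sketch, all identifiers hypothetical:

```python
def second_order_flags(first_order, interacts_with, deviated_in_window):
    """Flag entities whose usual contacts deviated in the same time window.

    first_order        -- set of entity ids already flagged
    interacts_with     -- dict: entity id -> set of its typical contacts
    deviated_in_window -- set of entities showing any deviation in the window
    """
    second = set()
    for a in first_order:
        for b in interacts_with.get(a, set()):
            # B deviates at the same time as its usual contact A:
            # the system infers a connection.
            if b in deviated_in_window and b not in first_order:
                second.add(b)
    return second

flagged = {"vehicle_A"}
graph = {"vehicle_A": {"vehicle_B", "vehicle_C"}}
deviations = {"vehicle_B"}   # B also parked somewhere unusual that day
print(second_order_flags(flagged, graph, deviations))   # prints {'vehicle_B'}
```

Note what the rule does not check: whether the two deviations share any cause. Simultaneity alone creates the inference, which is why the flag count roughly doubles within a day of deployment.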
By Saturday morning, there are two hundred and seven new flags.
Jeff opens the ranked list. The top flag is a cluster: four vehicles that typically park near each other in the Jefferson Avenue lot all parked in different locations on Friday. The system has inferred a relationship. The vehicles belong to employees of the same business. The business is a small engineering consultancy. The consultancy is not on Jeff’s target list, but the deviation is striking. Four vehicles. Same day. Independent deviations that align.
Jeff opens the footage. He watches each vehicle arrive. Different times. Different entrances. No obvious coordination. But the pattern is there.
He adds the consultancy to the watch list.
He moves to the next cluster. A pedestrian who usually buys coffee at 8:00 AM bought coffee at 8:45 AM on the same day a different pedestrian—who usually buys coffee at 8:45 AM—bought coffee at 8:00 AM. The system has flagged this as a temporal swap. Two entities exchanging positions in the timeline.
Jeff pulls the footage. The pedestrians do not interact. They do not acknowledge each other. They simply occupy each other’s slots.
Jeff stares at the screen.
The system is working. It is surfacing anomalies. The anomalies are real. The coffee times did swap. The vehicles did park in different locations. The woman did skip her walk. These are facts.
Jeff does not know what the facts mean.
He opens the assistant chat window. He types a new prompt.
How many flags can a single human analyst process per day?
The assistant replies immediately.
Processing capacity depends on the complexity of each flag and the depth of analysis required. For anomaly review involving video footage and cross-referencing, a reasonable estimate is 20–40 flags per 8-hour workday. This estimate assumes each flag requires 10–20 minutes of investigation.
Jeff types: Current flag generation rate is 30–50 flags per day. This rate is increasing as the baseline dataset expands. What is the recommended approach?
The assistant replies: Consider implementing automated filtering to reduce false positives, or prioritize high-confidence flags using a scoring algorithm. Alternatively, expand analyst capacity by delegating review tasks to additional personnel.
Jeff closes the chat window.
He looks at the ranked list. Two hundred and seven flags. He can process forty per day. The system generates fifty per day. The gap widens daily.
He thinks about ClearSightCA. ClearSightCA could help. ClearSightCA is fast.
He thinks about the woman who skipped her walk. He thinks about the vehicles that parked in the wrong zones. He thinks about the man with the wrong coffee size.
He thinks about the system surfacing everything that is different.
He thinks about the fact that different is not the same as wrong.
He opens the assistant chat window. He types a new prompt.
Modify the ranking algorithm to weight flags where the deviation has recurred at least twice. Deprioritize one-time deviations unless deviation magnitude exceeds 2 standard deviations.
The assistant returns an updated script.
Jeff deploys it.
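Jeff's final filter, as described in the prompt, is a two-clause predicate. A sketch, assuming per-flag recurrence counts and magnitudes are available:

```python
def keep_flag(recurrence, magnitude_sd):
    """Keep recurring deviations; keep one-time deviations only if they
    exceed 2 standard deviations from baseline."""
    return recurrence >= 2 or magnitude_sd > 2.0

print(keep_flag(recurrence=3, magnitude_sd=1.2))   # recurring -> prints True
print(keep_flag(recurrence=1, magnitude_sd=1.1))   # one-time, small -> prints False
print(keep_flag(recurrence=1, magnitude_sd=2.5))   # one-time, large -> prints True
```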
The flag count drops to one hundred and forty-three.
It is still too many.
Jeff closes the laptop. He sits in the dark. Outside, the cameras are recording. The system is running. The baseline is expanding. Tomorrow, there will be more flags.
He does not scale back.
He will prioritize.