
Remove dead code: unused function and stale build dep #3496

Merged
cuihtlauac merged 1 commit into ocaml:main from bbatsov:remove-dead-code on Feb 11, 2026

Conversation

@bbatsov
Contributor

@bbatsov bbatsov commented Feb 10, 2026

I noticed a couple of small bits of dead code while poking around the codebase:

  • Url.blog_post in src/global/url.ml — this function isn't called anywhere. local_blog is used, but blog_post appears to be a leftover.
  • planet-local-blogs source_tree dep in the root dune file — references data/planet-local-blogs/, which doesn't exist. Likely from an earlier iteration of the planet feed setup.

Neither removal affects any functionality. Just tidying things up.
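The two removals amount to deleting a handful of lines. A rough sketch of the diff (the function body and the surrounding dune context are assumed here for illustration, not copied from the repository):

```diff
--- a/src/global/url.ml
+++ b/src/global/url.ml
-(* hypothetical body; never called anywhere *)
-let blog_post slug = "/blog/" ^ slug

--- a/dune
+++ b/dune
-  (source_tree data/planet-local-blogs) ; directory does not exist
```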

@bbatsov bbatsov requested a review from avsm as a code owner February 10, 2026 15:03
@cuihtlauac
Collaborator

cuihtlauac commented Feb 10, 2026

Thanks, @bbatsov, for all the great contributions. However, the current workflow is becoming a bottleneck. I kindly assume your recent PRs are agentic-assisted, which is great. But the numbers aren't that great.

#3492 : 43 touched files (merged)
#3493 : 11 touched files
#3494 : 87 touched files
#3495 : 51 touched files
#3496 : 53 touched files

Total: 245 touched files. We need to enable agentic-assisted review. I suggest automating the creation of smaller, more targeted PRs. Maybe not one PR per touched file, but at least a unique and narrow kind of change per PR (e.g. a single markdown lint rule; not mere grammar, but rather: "fix its in it's", etc.). Also, tags should be added. I will create an "agentic-pr" tag, but we can have more granular ones too.

What do you think?

@bbatsov
Contributor Author

bbatsov commented Feb 11, 2026

Thanks, @bbatsov, for all the great contributions. However, the current workflow is becoming a bottleneck. I kindly assume your recent PRs are agentic-assisted, which is great. But the numbers aren't that great.

Yeah, you're right: the linting fixes are mostly applied via markdown-cli's auto-fixing functionality, and the grammar fixes are mostly done by Claude. My approach was to apply the same fixes folder by folder for data and the root docs (to avoid overwhelming diffs), as this seemed granular enough to me (and I go over the entire changelog manually). For this particular PR I see I got carried away and opened it from the same branch as #3495, which I'll fix in a bit.

If you introduce some tag that should be applied when using AI agents, I'd be happy to apply it when needed.

@bbatsov
Contributor Author

bbatsov commented Feb 11, 2026

@cuihtlauac Going over the PRs once again, I think #3493 is small enough in its current form, and I can think a bit more about the optimal structure for the other two (which touch more files). I thought that, given the simple nature of the changes, the granularity didn't matter that much (and I wasn't too keen on opening dozens of PRs). I had actually started by just fixing the lint issues for the tutorials, but the help of Claude made me slightly more ambitious. 😁

- Remove `Url.blog_post` function which is never called anywhere
- Remove `planet-local-blogs` source_tree dependency from dune which
  references a directory that doesn't exist
@voodoos
Contributor

voodoos commented Feb 11, 2026

Total: 245 touched files. We need to enable agentic-assisted review. I suggest automating the creation of smaller, more targeted PRs. Maybe not one PR per touched file, but at least a unique and narrow kind of change per PR (e.g. a single markdown lint rule, not mere grammar, but rather: "fix its in it's", etc.).

That got me curious, so I had a look at #3493, #3494 and #3495. I don't understand how splitting them would make them easier or less time-consuming to review. They do mix multiple changes, and it would have been nice to have one commit per such change instead of a single one, but they remain fairly straightforward to review due to the nature of these changes...

Thanks a lot @bbatsov for all the nice improvements :-)

@bbatsov
Contributor Author

bbatsov commented Feb 11, 2026

Thanks!

For what it's worth, I was splitting the PRs exactly as I would have if I hadn't used any helper tools at all. But I totally get that different people have different perspectives on the optimal granularity for such changes.

@cuihtlauac
Collaborator

cuihtlauac commented Feb 11, 2026

That got me curious, so I had a look at #3493, #3494 and #3495. I don't understand how splitting them would make them easier or less time-consuming to review. They do mix multiple changes, and it would have been nice to have one commit per such change instead of a single one, but they remain fairly straightforward to review due to the nature of these changes...

Thanks a lot @bbatsov for all the nice improvements :-)

If I ask Claude: “Review PR 3495. List all hunks. Categorise them by similarity.”, it produces something like the report copied below (I only copied the summary).

I would like PRs with fewer categories. If I had those, I could ask Claude to write a simple script that checks whether the hunks are indeed all of the same kind: review the script, run it, and if it passes, merge the PR. If the script is provided in the PR description, it's even better.
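As a sketch of the kind of check described above: a short script that splits a unified diff into hunks, classifies each one with a single narrow rule, and passes only if every hunk matches. The `list-marker` rule below (swapping a leading `*` for `-`) is a hypothetical example of such a rule, not taken from any of the actual PRs.

```python
import re

# Minimal sample diff with two hunks of the same kind (illustrative only).
SAMPLE_DIFF = """--- a/doc.md
+++ b/doc.md
@@ -1,2 +1,2 @@
-* first item
+- first item
@@ -10,1 +10,1 @@
-* second item
+- second item
"""

def hunks(diff_text):
    # Split on "@@ ... @@" hunk headers; drop the file preamble before
    # the first header so "---"/"+++" lines never reach the classifier.
    parts = re.split(r"^@@[^\n]*$", diff_text, flags=re.M)
    return parts[1:]

def classify(hunk):
    # One narrow, hypothetical rule: every change in the hunk swaps a
    # leading "*" list marker for "-", leaving the rest of the line intact.
    removed = [l[1:] for l in hunk.splitlines() if l.startswith("-")]
    added = [l[1:] for l in hunk.splitlines() if l.startswith("+")]
    ok = len(removed) == len(added) and all(
        r.lstrip().startswith("* ") and r.replace("*", "-", 1) == a
        for r, a in zip(removed, added)
    )
    return "list-marker" if ok else "other"

def uniform(diff_text, kind):
    # True iff the diff has at least one hunk and all are of the expected kind.
    hs = hunks(diff_text)
    return bool(hs) and all(classify(h) == kind for h in hs)

print(uniform(SAMPLE_DIFF, "list-marker"))  # expect: True
```

A reviewer would then only need to check the one rule, not every hunk: if `uniform` returns True, all 200 hunks are the same mechanical change.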

AI SLOP STARTS HERE

| # | Category | Hunk count | Risk |
|---|----------|------------|------|
| 1 | Linter config (siblings_only) | 1 | Low |
| 2 | `#####` heading level fix (conferences) | ~14 | Low — but changes rendered heading size |
| 3 | Bold → proper heading (MD036) | 5 | Check — changes heading hierarchy & renders differently |
| 4 | Code block language specifiers | 7 | None |
| 5 | Emphasis removal from taglines | 4 | Check — changes rendered appearance (italic → plain) |
| 6 | List marker `*` → `-` | 2 | None |
| 7 | List indentation fix | 3 | Low — but verify rendered list nesting unchanged |
| 8 | Bare URL/email → autolink | 3 | Low |
| 9 | Prose typo fixes | ~27 | Low — clear corrections |
| 10 | Missing final newline | 17 | None |
| 11 | Extra blank line removal | ~17 | None |
| 12 | Trailing whitespace removal | ~8 | None |
| 13 | Stray HTML tag removal | 1 | None |
| 14 | Unused reference link removal | 1 | None |
| 15 | Missing blank line after heading | 2 | None |

Items worth verifying:

  • Category 3 (bold → heading): The roadmap sub-sub-headings become ##### — verify the site CSS renders h5 appropriately and that the heading hierarchy
    (### → #### → #####) makes sense. The governance.md change promotes **Version 1.0.0...** to ##, which is a more significant structural change.
  • Category 5 (emphasis removal): The persona taglines in 0op_02_users.md lose their italic styling. This is a visual change — verify it's
    intentional/acceptable for the platform users page.
  • Category 9 (typos in YAML frontmatter): The event title fixes ("Continuted" → "Continued") and conference affiliation fix ("Ljublijana, Solvenia") are
    in YAML frontmatter. Verify these fields are not used as lookup keys or URL slugs elsewhere.

@voodoos
Contributor

voodoos commented Feb 11, 2026

That's cool, but it is still a 5-minute review for trivial changes :-) and I learned about the <url> markdown shortcut in the process 😅
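(For reference, the shortcut in question is the CommonMark autolink: wrapping a bare URL or email address in angle brackets turns it into an explicit link, which is what markdownlint's MD034 rule nudges you toward.)

```markdown
Bare URL, flagged by markdownlint as MD034:
Visit https://ocaml.org to learn more.

Autolink form:
Visit <https://ocaml.org> to learn more.
```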

Sorry for the off-topic discussion, and about the changes of this PR, they also seem good to me.

@cuihtlauac
Collaborator

cuihtlauac commented Feb 11, 2026

That's cool, but it is still a 5 minute review for trivial changes :-) and I learned about the <url> markdown shortcut in the process 😅

Sorry for the off-topic discussion, and about the changes of this PR, they also seem good to me.

No worries. Actually, it's nice to have this kind of discussion. I see such PRs as a nice way to experiment with new ways of reviewing. It's a matter of fairness: if the PR is machine-generated, it should be free to make it machine-reviewable.

@cuihtlauac cuihtlauac merged commit c73822e into ocaml:main Feb 11, 2026
3 of 4 checks passed
@bbatsov
Contributor Author

bbatsov commented Feb 11, 2026

It's a matter of fairness: if the PR is machine-generated, it should be free to make it machine-reviewable.

That's fair, but I still don't trust the coding agents that much and prefer to do human reviews as well. But the way things are going we might not really be needed much a few years down the line... 😅

@cuihtlauac
Collaborator

cuihtlauac commented Feb 11, 2026

It's a matter of fairness: if the PR is machine-generated, it should be free to make it machine-reviewable.

That's fair, but I still don't trust the coding agents that much and prefer to do human reviews as well. But the way things are going we might not really be needed much a few years down the line... 😅

Not trusting these things (which just makes sense) should not imply that we're reduced to being reviewers of loads of slop. And anyway, that doesn't scale. Let's just ask the AI to generate extra content, allowing it to be reviewed by tools we already trust.

@voodoos
Contributor

voodoos commented Feb 11, 2026

Not trusting these things (which just makes sense) should not imply that we're reduced to being reviewers of loads of slop. And anyway, that doesn't scale. Let's just ask the AI to generate extra content, allowing it to be reviewed by tools we already trust.

For me it's not even a question of trust, but of necessity and usefulness. @bbatsov's PR is not a slop PR; we have been using automated tools for typo correction forever. Good PRs can be made with the help of LLMs — it doesn't have to be slop. Slop PRs should not be reviewed at all, IMO.

Again, I am not saying that using an agent to help review complex code logic (be it itself generated or not) isn't useful or trustworthy. But these were typo-fixing PRs; agentic review brings very little value there compared to direct review.

