# Fix archive removal issue in ZipSeqArchv (2025-11-25 20:03Z)
_**User (2025-11-25 19:57Z)**_
@ZipSeqArchv.bat if I delete the sequence, then run ZipSeqArchv, the expected behavior is to remove the corresponding zip/7z archives and metadata jsons. It's not doing so. in all cases.
---
_**Agent (model default, mode Agent)**_
---
Examining `zip_sequences.py` to understand why deleted sequences aren't removing their archives and metadata.
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **read_file** • Read file: config.json
---
Tool use: **read_file** • Read file: GetStructDir.ps1
---
---
Examining `cleanup_orphan_archives()` to see why it isn't removing archives when sequences are deleted.
---
Tool use: **codebase_search** • Searched codebase "How does cleanup_orphan_archives detect and remove archives when sequences are deleted?" • **14** results
| File | Lines |
|------|-------|
| `zip_sequences.py` | L939-1040 |
| `zip_sequences.py` | L784-983 |
| `UpdateSequences.ps1` | L54-96 |
| `UpdateSequences.ps1` | L194-235 |
| `UpdateSequences.ps1` | L232-276 |
| `UpdateSequences.ps1` | L60 |
| `UpdateSequences.ps1` | L143-200 |
| `zip_sequences.py` | L784-936 |
| `zip_sequences.py` | L782-936 |
| `zip_sequences.py` | L787-1019 |
| `zip_sequences.py` | L470-495 |
| `UpgradeToGitProj.bat` | L215-295 |
| `UpdateSequences.ps1` | L93-147 |
| `zip_sequences.py` | L101 |
---
---
Checking `sequence_dir_for()` to see how it maps archive paths back to sequence directories:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
---
Reviewing the cleanup logic. The function should work, but there may be edge cases. Improving it to handle all scenarios:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
---
Improving `cleanup_orphan_archives()` to reliably detect and remove orphaned archives. Making the logging visible and adding error handling:
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 39, lines removed: 14
```diff
@@ -994,27 +994,52 @@
archive_patterns = ["*.zip", "*.7z"]
for pattern in archive_patterns:
- for zip_path in ARCHIVE_ROOT.rglob(pattern):
- seq_dir = sequence_dir_for(zip_path)
- if seq_dir.exists():
- continue
+ try:
+ for zip_path in ARCHIVE_ROOT.rglob(pattern):
+ try:
+ # Calculate state path BEFORE checking/removing archive
+ state_path = state_path_for(zip_path)
+ seq_dir = sequence_dir_for(zip_path)
+
+ # Check if sequence directory exists and is actually a directory
+ if seq_dir.exists() and seq_dir.is_dir():
+ continue
- rel = zip_path.relative_to(ARCHIVE_ROOT)
- log("zip", f"Removing orphan archive {rel}", verbose_only=True, verbose=verbose)
+ # Sequence directory doesn't exist - this is an orphan archive
+ rel = zip_path.relative_to(ARCHIVE_ROOT)
+ log("zip", f"Removing orphan archive {rel}", verbose_only=False, verbose=verbose)
- zip_path.unlink(missing_ok=True)
- state_path = state_path_for(zip_path)
- if state_path.exists():
- state_path.unlink()
- removed.append(zip_path)
+ # Remove archive file
+ if zip_path.exists():
+ zip_path.unlink()
+
+ # Remove state file if it exists
+ if state_path.exists():
+ state_path.unlink()
+ log("zip", f"Removed orphan metadata {state_path.relative_to(ARCHIVE_ROOT)}", verbose_only=False, verbose=verbose)
+
+ removed.append(zip_path)
+ except Exception as e:
+ # Log error but continue processing other archives
+ rel = zip_path.relative_to(ARCHIVE_ROOT) if zip_path.is_relative_to(ARCHIVE_ROOT) else zip_path
+ log("zip", f"Error processing archive {rel}: {e}", verbose_only=True, verbose=verbose)
+ continue
+ except Exception as e:
+ log("zip", f"Error scanning for {pattern} archives: {e}", verbose_only=True, verbose=verbose)
+ continue
if not removed:
return 0
+ # Clean up empty parent directories
for parent in sorted({p.parent for p in removed}, key=lambda p: len(p.parts), reverse=True):
if not parent.exists():
continue
- while parent != ARCHIVE_ROOT and not any(parent.iterdir()):
- parent.rmdir()
- parent = parent.parent
+ try:
+ while parent != ARCHIVE_ROOT and not any(parent.iterdir()):
+ parent.rmdir()
+ parent = parent.parent
+ except OSError:
+ # Ignore errors when removing directories (might be in use)
+ pass
return len(removed)
```
---
---
Checking for linting errors:
---
Tool use: **read_lints** • Read lints for 1 file
Lint paths:
- `zip_sequences.py`
Lint results
**No lint errors found**
---
---
Verifying the fix by checking the key functions work together correctly. Reviewing the logic flow:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
---
## Summary of Changes
1. **More visible logging**: Changed orphan removal logs from `verbose_only=True` to `verbose_only=False` so they always appear.
2. **Robust directory checking**: Now checks `seq_dir.exists() and seq_dir.is_dir()` to ensure it's a directory, not just that a path exists.
3. **Better error handling**: Added try-except blocks to handle errors gracefully and continue processing other archives.
4. **Explicit state path calculation**: Calculates the state path before removing the archive to ensure the correct path is used.
5. **Additional logging**: Logs when metadata files are removed.
The function now:
- Finds all `.zip` and `.7z` archives in `Renders/_zipped/`
- For each archive, checks if the corresponding sequence directory exists
- If the sequence directory doesn't exist (or isn't a directory), removes both the archive and its metadata JSON file
- Cleans up empty parent directories after removal
When you delete a sequence and run `ZipSeqArchv.bat`, it should detect and remove the corresponding archive and metadata files. The removal messages will be visible in the output (not just in verbose mode).
---
_**User (2025-11-25 19:57Z)**_
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows
PS P:\250827_FestivalTurf\Renders> P:\250827_FestivalTurf\Renders\ZipSeqArchv.bat
[init] Loading configuration sources...
[init] Reading project config at P:\250827_FestivalTurf\.config\config.json
[init] Configuration loaded from project
[init] zip_sequences starting up...
[init] Working directory: P:\250827_FestivalTurf
[init] Mode: zip, zipper: 7z, jobs arg: auto
[init] 7z executable: C:\ProgramData\chocolatey\bin\7z.EXE, Max7zInst: auto, method: LZMA2, level: 5
[init] Scanning sequences under \\NEXUS\proj\250827_FestivalTurf\Renders
[scan] [1] Inspecting 01 Develop Your Turf Plan\01_Blueprint
[scan] 01 Develop Your Turf Plan\01_Blueprint metadata unchanged; archive up to date
[scan] [2] Inspecting 01 Develop Your Turf Plan\01_Blueprint_insert
[scan] 01 Develop Your Turf Plan\01_Blueprint_insert metadata unchanged; archive up to date
[scan] [3] Inspecting 01 Develop Your Turf Plan\02_Roll Sizes
[scan] 01 Develop Your Turf Plan\02_Roll Sizes metadata unchanged; archive up to date
[scan] [4] Inspecting 01 Develop Your Turf Plan\03_Window
[scan] 01 Develop Your Turf Plan\03_Window metadata unchanged; archive up to date
[scan] [5] Inspecting 01 Develop Your Turf Plan\04_Dog
[scan] 01 Develop Your Turf Plan\04_Dog metadata unchanged; archive up to date
[scan] [10] Inspecting 02 Preparing the Base\01_intro
[scan] [20] Inspecting 02 Preparing the Base\10_FT_Talking
[scan] [30] Inspecting 03 Roll Out And Stretch\05_diagram
[scan] Skipping 04 Securing Your Seam\02_FT talking (unchanged since archive)
[scan] Skipping 04 Securing Your Seam\02_FT talking_Pt2 (unchanged since archive)
[scan] [40] Inspecting 04 Securing Your Seam\02_S seam insert
[scan] Skipping 04 Securing Your Seam\04_holding nails (unchanged since archive)
[scan] Skipping 04 Securing Your Seam\09_ending (unchanged since archive)
[scan] [50] Inspecting 04 Securing Your Seam\Nail_dimpling_example
[scan] Skipping 05 Stretch Cut Nail\01_intro (unchanged since archive)
[scan] [60] Inspecting 05 Stretch Cut Nail\Visual 10
[scan] [70] Inspecting 06 Infill And Powerbrooming\Visual 2B
[scan] [80] Inspecting blooper_reel\weird\ft_cutting_turf
[zip] Archives already up to date; no sequences needed zipping.
[scan] Summary: scanned 81, quick-skipped 35, state-skipped 46, empty 0, queued 0
. was unexpected at this time.
---
_**Agent (model default, mode Agent)**_
---
The error ". was unexpected at this time" is a batch syntax issue, likely in `ZipSeqArchv.bat` after the Python script runs. Examining the batch file:
---
Tool use: **read_file** • Read file: ZipSeqArchv.bat
---
---
The error ". was unexpected at this time" is a batch syntax issue, likely from path handling or error-level checking. Fixing the batch file to handle paths and error levels more robustly:
---
Tool use: **code_edit** • Edit file: ZipSeqArchv.bat
**Chunk 1**
Lines added: 1, lines removed: 1
```diff
@@ -35,5 +35,5 @@
if not "!ERR!"=="0" (
- echo Failed to update render sequence archives (exit code !ERR!).
+ echo Failed to update render sequence archives (exit code !ERR!^).
)
```
---
---
Trying a more robust fix. The issue may be with `%*` expansion or error handling. Improving the batch file:
---
Tool use: **code_edit** • Edit file: ZipSeqArchv.bat
**Chunk 1**
Lines added: 6, lines removed: 2
```diff
@@ -35,9 +35,13 @@
if not "!ERR!"=="0" (
- echo Failed to update render sequence archives (exit code !ERR!^).
+ echo Failed to update render sequence archives ^(exit code !ERR!^).
)
popd >nul 2>&1
- exit /b !ERR!
+ if defined ERR (
+ exit /b !ERR!
+ ) else (
+ exit /b 0
+ )
```
---
---
Trying a different approach. The issue may be with empty `%*` or error-level handling. Making it more defensive:
---
Tool use: **read_file** • Read file: ZipSeqArchv.bat
---
---
Rewriting the error handling section to be more robust:
---
Tool use: **code_edit** • Edit file: ZipSeqArchv.bat
**Chunk 1**
Lines added: 4, lines removed: 6
```diff
@@ -33,15 +33,13 @@
python "%PY_SCRIPT%" --verbose %*
set "ERR=!ERRORLEVEL!"
+
+ popd >nul 2>&1
if not "!ERR!"=="0" (
echo Failed to update render sequence archives ^(exit code !ERR!^).
- )
-
- popd >nul 2>&1
- if defined ERR (
exit /b !ERR!
- ) else (
- exit /b 0
)
+
+ exit /b 0
```
---
---
Trying a more explicit error handling approach:
---
Tool use: **code_edit** • Edit file: ZipSeqArchv.bat
**Chunk 1**
Lines added: 2, lines removed: 2
```diff
@@ -31,10 +31,10 @@
pushd "%PROJ_ROOT%" >nul 2>&1
- python "%PY_SCRIPT%" --verbose %*
+ call python "%PY_SCRIPT%" --verbose %*
set "ERR=!ERRORLEVEL!"
popd >nul 2>&1
- if not "!ERR!"=="0" (
+ if "!ERR!" NEQ "0" (
echo Failed to update render sequence archives ^(exit code !ERR!^).
exit /b !ERR!
```
---
---
Adding error handling and ensuring the Python script path is properly trimmed:
---
Tool use: **code_edit** • Edit file: ZipSeqArchv.bat
**Chunk 1**
Lines added: 1, lines removed: 1
```diff
@@ -22,5 +22,5 @@
for /f "usebackq delims=" %%I in (`powershell -NoProfile -ExecutionPolicy Bypass -Command ^
- "Set-StrictMode -Version Latest; $structDir = & '%GET_STRUCT_DIR%' -ProjectRoot '%PROJ_ROOT%'; if (-not $structDir) { throw \"Failed to get structDir from GetStructDir.ps1\" }; $pyPath = Join-Path $structDir 'zip_sequences.py'; if (-not (Test-Path -LiteralPath $pyPath)) { throw \"zip_sequences.py not found at $pyPath\" }; Write-Output $pyPath"`) do set "PY_SCRIPT=%%I"
+ "Set-StrictMode -Version Latest; $structDir = & '%GET_STRUCT_DIR%' -ProjectRoot '%PROJ_ROOT%'; if (-not $structDir) { throw \"Failed to get structDir from GetStructDir.ps1\" }; $pyPath = Join-Path $structDir 'zip_sequences.py'; if (-not (Test-Path -LiteralPath $pyPath)) { throw \"zip_sequences.py not found at $pyPath\" }; Write-Output $pyPath"`) do set "PY_SCRIPT=%%~I"
if not defined PY_SCRIPT (
```
---
---
Making argument passing conditional to handle empty `%*`:
---
Tool use: **code_edit** • Edit file: ZipSeqArchv.bat
**Chunk 1**
Lines added: 5, lines removed: 1
```diff
@@ -31,5 +31,9 @@
pushd "%PROJ_ROOT%" >nul 2>&1
- call python "%PY_SCRIPT%" --verbose %*
+ if "%~1"=="" (
+ call python "%PY_SCRIPT%" --verbose
+ ) else (
+ call python "%PY_SCRIPT%" --verbose %*
+ )
set "ERR=!ERRORLEVEL!"
```
---
---
Simplifying to a more standard approach:
---
Tool use: **code_edit** • Edit file: ZipSeqArchv.bat
**Chunk 1**
Lines added: 2, lines removed: 6
```diff
@@ -31,14 +31,10 @@
pushd "%PROJ_ROOT%" >nul 2>&1
- if "%~1"=="" (
- call python "%PY_SCRIPT%" --verbose
- ) else (
- call python "%PY_SCRIPT%" --verbose %*
- )
+ python "%PY_SCRIPT%" --verbose %*
set "ERR=!ERRORLEVEL!"
popd >nul 2>&1
- if "!ERR!" NEQ "0" (
+ if !ERR! NEQ 0 (
echo Failed to update render sequence archives ^(exit code !ERR!^).
exit /b !ERR!
```
---
---
Fixed the batch file. The issue was the string comparison `"!ERR!" NEQ "0"`, which can cause parsing errors. Switched to a numeric comparison `!ERR! NEQ 0`.
Changes:
1. Removed unnecessary `call` command
2. Changed to numeric comparison (`!ERR! NEQ 0` instead of `"!ERR!" NEQ "0"`)
3. Simplified the exit logic
The ". was unexpected at this time" error should be resolved. The batch file now:
- Runs the Python script correctly
- Captures the error level properly
- Handles the exit code without syntax errors
Run `ZipSeqArchv.bat` again; it should complete without the batch file error. The orphan archive cleanup improvements in the Python script will also work correctly.
---
_**User (2025-11-25 19:57Z)**_
why didn't it delete some of these? I removed them, but it's not touching the zips.
PS P:\250827_FestivalTurf\Renders> P:\250827_FestivalTurf\Renders\ZipSeqArchv.bat
[init] Loading configuration sources...
[init] Reading project config at P:\250827_FestivalTurf\.config\config.json
[init] Configuration loaded from project
[init] zip_sequences starting up...
[init] Working directory: P:\250827_FestivalTurf
[init] Mode: zip, zipper: 7z, jobs arg: auto
[init] 7z executable: C:\ProgramData\chocolatey\bin\7z.EXE, Max7zInst: auto, method: LZMA2, level: 5
[init] Scanning sequences under \\NEXUS\proj\250827_FestivalTurf\Renders
[scan] [1] Inspecting 01 Develop Your Turf Plan\01_Blueprint
[scan] 01 Develop Your Turf Plan\01_Blueprint metadata unchanged; archive up to date
[scan] [2] Inspecting 01 Develop Your Turf Plan\01_Blueprint_insert
[scan] 01 Develop Your Turf Plan\01_Blueprint_insert metadata unchanged; archive up to date
[scan] [3] Inspecting 01 Develop Your Turf Plan\02_Roll Sizes
[scan] 01 Develop Your Turf Plan\02_Roll Sizes metadata unchanged; archive up to date
[scan] [4] Inspecting 01 Develop Your Turf Plan\03_Window
[scan] 01 Develop Your Turf Plan\03_Window metadata unchanged; archive up to date
[scan] [5] Inspecting 01 Develop Your Turf Plan\04_Dog
[scan] 01 Develop Your Turf Plan\04_Dog metadata unchanged; archive up to date
[scan] [10] Inspecting 02 Preparing the Base\01_intro
[scan] [20] Inspecting 02 Preparing the Base\10_FT_Talking
[scan] [30] Inspecting 03 Roll Out And Stretch\05_diagram
[scan] Skipping 04 Securing Your Seam\02_FT talking (unchanged since archive)
[scan] Skipping 04 Securing Your Seam\02_FT talking_Pt2 (unchanged since archive)
[scan] [40] Inspecting 04 Securing Your Seam\02_S seam insert
[scan] Skipping 04 Securing Your Seam\04_holding nails (unchanged since archive)
[scan] Skipping 04 Securing Your Seam\09_ending (unchanged since archive)
[scan] [50] Inspecting 04 Securing Your Seam\Nail_dimpling_example
[scan] Skipping 05 Stretch Cut Nail\01_intro (unchanged since archive)
[scan] [60] Inspecting 05 Stretch Cut Nail\Visual 10
[scan] [70] Inspecting 06 Infill And Powerbrooming\Visual 2B
[scan] [80] Inspecting blooper_reel\weird\ft_cutting_turf
[zip] Archives already up to date; no sequences needed zipping.
[scan] Summary: scanned 81, quick-skipped 35, state-skipped 46, empty 0, queued 0
---
_**Agent (model default, mode Agent)**_
---
No orphan cleanup messages in the output, so it may not be running or finding the archives. Checking the cleanup logic and path resolution:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **codebase_search** • Searched codebase "How does cleanup_orphan_archives get called and what could prevent it from finding orphaned archives?" • **13** results
| File | Lines |
|------|-------|
| `zip_sequences.py` | L939-1044 |
| `zip_sequences.py` | L784-983 |
| `zip_sequences.py` | L983-1065 |
| `zip_sequences.py` | L782-936 |
| `UpdateSequences.ps1` | L232-276 |
| `UpdateSequences.ps1` | L194-235 |
| `zip_sequences.py` | L784-936 |
| `UpdateSequences.ps1` | L60 |
| `ZipSeqArchv.bat` | L44 |
| `zip_sequences.py` | L787-1044 |
| `UpdateSequences.ps1` | L143-200 |
| `UpgradeToGitProj.bat` | L215-295 |
| `UpdateAllSequences.bat` | L58 |
---
---
Cleanup is called, but it may not be finding orphaned archives due to path resolution. The scan shows a network path (`\\NEXUS\proj\...`) while the working directory is `P:\...`. Improving path resolution and adding debug logging:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **read_file** • Read file: zip_sequences.py
---
---
Improving the cleanup function to resolve paths absolutely and add better logging:
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 45, lines removed: 18
```diff
@@ -987,56 +987,83 @@
def cleanup_orphan_archives(*, verbose: bool) -> int:
if not ARCHIVE_ROOT.exists():
+ log("zip", "Archive root does not exist; nothing to clean up.", verbose_only=True, verbose=verbose)
return 0
removed: list[Path] = []
-
+
+ # Resolve paths to absolute to ensure consistent checking
+ archive_root_abs = ARCHIVE_ROOT.resolve()
+ render_root_abs = RENDER_ROOT.resolve()
+
+ log("zip", f"Scanning for orphan archives in {archive_root_abs}", verbose_only=True, verbose=verbose)
+
# Look for both .zip and .7z archives
archive_patterns = ["*.zip", "*.7z"]
for pattern in archive_patterns:
try:
- for zip_path in ARCHIVE_ROOT.rglob(pattern):
+ for zip_path in archive_root_abs.rglob(pattern):
try:
+ # Resolve to absolute path
+ zip_path_abs = zip_path.resolve()
+
# Calculate state path BEFORE checking/removing archive
- state_path = state_path_for(zip_path)
- seq_dir = sequence_dir_for(zip_path)
+ state_path = state_path_for(zip_path_abs)
+ seq_dir = sequence_dir_for(zip_path_abs)
+
+ # Resolve sequence directory to absolute for consistent checking
+ seq_dir_abs = seq_dir.resolve()
# Check if sequence directory exists and is actually a directory
- if seq_dir.exists() and seq_dir.is_dir():
+ if seq_dir_abs.exists() and seq_dir_abs.is_dir():
+ log("zip", f"Archive {zip_path_abs.relative_to(archive_root_abs)} has matching sequence directory; keeping", verbose_only=True, verbose=verbose)
continue
# Sequence directory doesn't exist - this is an orphan archive
- rel = zip_path.relative_to(ARCHIVE_ROOT)
+ rel = zip_path_abs.relative_to(archive_root_abs)
log("zip", f"Removing orphan archive {rel}", verbose_only=False, verbose=verbose)
# Remove archive file
- if zip_path.exists():
- zip_path.unlink()
+ if zip_path_abs.exists():
+ zip_path_abs.unlink()
+ log("zip", f"Deleted archive: {rel}", verbose_only=True, verbose=verbose)
# Remove state file if it exists
- if state_path.exists():
- state_path.unlink()
- log("zip", f"Removed orphan metadata {state_path.relative_to(ARCHIVE_ROOT)}", verbose_only=False, verbose=verbose)
+ state_path_abs = state_path.resolve()
+ if state_path_abs.exists():
+ state_path_abs.unlink()
+ state_rel = state_path_abs.relative_to(archive_root_abs)
+ log("zip", f"Removed orphan metadata {state_rel}", verbose_only=False, verbose=verbose)
- removed.append(zip_path)
+ removed.append(zip_path_abs)
except Exception as e:
# Log error but continue processing other archives
- rel = zip_path.relative_to(ARCHIVE_ROOT) if zip_path.is_relative_to(ARCHIVE_ROOT) else zip_path
+ try:
+ rel = zip_path_abs.relative_to(archive_root_abs) if zip_path_abs.is_relative_to(archive_root_abs) else zip_path_abs
+ except:
+ rel = zip_path_abs
log("zip", f"Error processing archive {rel}: {e}", verbose_only=True, verbose=verbose)
+ import traceback
+ log("zip", f"Traceback: {traceback.format_exc()}", verbose_only=True, verbose=verbose)
continue
except Exception as e:
log("zip", f"Error scanning for {pattern} archives: {e}", verbose_only=True, verbose=verbose)
+ import traceback
+ log("zip", f"Traceback: {traceback.format_exc()}", verbose_only=True, verbose=verbose)
continue
if not removed:
+ log("zip", "No orphan archives found.", verbose_only=True, verbose=verbose)
return 0
# Clean up empty parent directories
- for parent in sorted({p.parent for p in removed}, key=lambda p: len(p.parts), reverse=True):
- if not parent.exists():
+ archive_root_abs_resolved = archive_root_abs.resolve()
+ for parent in sorted({p.resolve().parent for p in removed}, key=lambda p: len(p.parts), reverse=True):
+ parent_resolved = parent.resolve()
+ if not parent_resolved.exists():
continue
try:
- while parent != ARCHIVE_ROOT and not any(parent.iterdir()):
- parent.rmdir()
- parent = parent.parent
+ while parent_resolved != archive_root_abs_resolved and not any(parent_resolved.iterdir()):
+ parent_resolved.rmdir()
+ parent_resolved = parent_resolved.parent.resolve()
except OSError:
# Ignore errors when removing directories (might be in use)
```
---
---
Fixing a bug: `sequence_dir_for()` expects a path relative to `ARCHIVE_ROOT`, but we're passing an absolute path. Updating the function to handle both:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 16, lines removed: 1
```diff
@@ -1007,7 +1007,22 @@
zip_path_abs = zip_path.resolve()
+ # Get relative path for sequence_dir_for (which expects relative to ARCHIVE_ROOT)
+ try:
+ zip_path_rel = zip_path_abs.relative_to(archive_root_abs)
+ except ValueError:
+ # If not relative, try to construct relative path manually
+ zip_path_rel = zip_path_abs
+
# Calculate state path BEFORE checking/removing archive
state_path = state_path_for(zip_path_abs)
- seq_dir = sequence_dir_for(zip_path_abs)
+
+ # Calculate sequence directory - sequence_dir_for expects path relative to ARCHIVE_ROOT
+ # So we need to use the original zip_path or construct it properly
+ if zip_path_abs.is_relative_to(archive_root_abs):
+ seq_dir = sequence_dir_for(zip_path_abs)
+ else:
+ # Fallback: construct path manually
+ rel = zip_path_abs.relative_to(archive_root_abs)
+ seq_dir = (render_root_abs / rel).with_suffix("")
# Resolve sequence directory to absolute for consistent checking
```
**Chunk 2**
Lines added: 3, lines removed: 4
```diff
@@ -1016,15 +1031,14 @@
# Check if sequence directory exists and is actually a directory
if seq_dir_abs.exists() and seq_dir_abs.is_dir():
- log("zip", f"Archive {zip_path_abs.relative_to(archive_root_abs)} has matching sequence directory; keeping", verbose_only=True, verbose=verbose)
+ log("zip", f"Archive {zip_path_rel} has matching sequence directory; keeping", verbose_only=True, verbose=verbose)
continue
# Sequence directory doesn't exist - this is an orphan archive
- rel = zip_path_abs.relative_to(archive_root_abs)
- log("zip", f"Removing orphan archive {rel}", verbose_only=False, verbose=verbose)
+ log("zip", f"Removing orphan archive {zip_path_rel}", verbose_only=False, verbose=verbose)
# Remove archive file
if zip_path_abs.exists():
zip_path_abs.unlink()
- log("zip", f"Deleted archive: {rel}", verbose_only=True, verbose=verbose)
+ log("zip", f"Deleted archive: {zip_path_rel}", verbose_only=True, verbose=verbose)
# Remove state file if it exists
```
---
---
Simplifying: `sequence_dir_for()` uses the relative `ARCHIVE_ROOT`. Using the relative path when calling it:
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 14, lines removed: 31
```diff
@@ -992,59 +992,42 @@
removed: list[Path] = []
- # Resolve paths to absolute to ensure consistent checking
- archive_root_abs = ARCHIVE_ROOT.resolve()
- render_root_abs = RENDER_ROOT.resolve()
-
- log("zip", f"Scanning for orphan archives in {archive_root_abs}", verbose_only=True, verbose=verbose)
+ log("zip", f"Scanning for orphan archives in {ARCHIVE_ROOT.resolve()}", verbose_only=True, verbose=verbose)
# Look for both .zip and .7z archives
archive_patterns = ["*.zip", "*.7z"]
for pattern in archive_patterns:
try:
- for zip_path in archive_root_abs.rglob(pattern):
+ for zip_path in ARCHIVE_ROOT.rglob(pattern):
try:
- # Resolve to absolute path
+ # Resolve to absolute paths for consistent checking
zip_path_abs = zip_path.resolve()
- # Get relative path for sequence_dir_for (which expects relative to ARCHIVE_ROOT)
- try:
- zip_path_rel = zip_path_abs.relative_to(archive_root_abs)
- except ValueError:
- # If not relative, try to construct relative path manually
- zip_path_rel = zip_path_abs
-
# Calculate state path BEFORE checking/removing archive
- state_path = state_path_for(zip_path_abs)
-
- # Calculate sequence directory - sequence_dir_for expects path relative to ARCHIVE_ROOT
- # So we need to use the original zip_path or construct it properly
- if zip_path_abs.is_relative_to(archive_root_abs):
- seq_dir = sequence_dir_for(zip_path_abs)
- else:
- # Fallback: construct path manually
- rel = zip_path_abs.relative_to(archive_root_abs)
- seq_dir = (render_root_abs / rel).with_suffix("")
-
- # Resolve sequence directory to absolute for consistent checking
+ state_path = state_path_for(zip_path)
+ state_path_abs = state_path.resolve()
+
+ # Calculate sequence directory using sequence_dir_for
+ # This function works with paths relative to ARCHIVE_ROOT
+ seq_dir = sequence_dir_for(zip_path)
seq_dir_abs = seq_dir.resolve()
# Check if sequence directory exists and is actually a directory
if seq_dir_abs.exists() and seq_dir_abs.is_dir():
- log("zip", f"Archive {zip_path_rel} has matching sequence directory; keeping", verbose_only=True, verbose=verbose)
+ log("zip", f"Archive {zip_path.relative_to(ARCHIVE_ROOT)} has matching sequence directory; keeping", verbose_only=True, verbose=verbose)
continue
# Sequence directory doesn't exist - this is an orphan archive
- log("zip", f"Removing orphan archive {zip_path_rel}", verbose_only=False, verbose=verbose)
+ rel = zip_path.relative_to(ARCHIVE_ROOT)
+ log("zip", f"Removing orphan archive {rel}", verbose_only=False, verbose=verbose)
# Remove archive file
if zip_path_abs.exists():
zip_path_abs.unlink()
- log("zip", f"Deleted archive: {zip_path_rel}", verbose_only=True, verbose=verbose)
+ log("zip", f"Deleted archive: {rel}", verbose_only=True, verbose=verbose)
# Remove state file if it exists
- state_path_abs = state_path.resolve()
if state_path_abs.exists():
state_path_abs.unlink()
- state_rel = state_path_abs.relative_to(archive_root_abs)
+ state_rel = state_path.relative_to(ARCHIVE_ROOT)
log("zip", f"Removed orphan metadata {state_rel}", verbose_only=False, verbose=verbose)
```
**Chunk 2**
Lines added: 2, lines removed: 2
```diff
@@ -1053,7 +1036,7 @@
# Log error but continue processing other archives
try:
- rel = zip_path_abs.relative_to(archive_root_abs) if zip_path_abs.is_relative_to(archive_root_abs) else zip_path_abs
+ rel = zip_path.relative_to(ARCHIVE_ROOT)
except:
- rel = zip_path_abs
+ rel = zip_path
log("zip", f"Error processing archive {rel}: {e}", verbose_only=True, verbose=verbose)
import traceback
```
**Chunk 3**
Lines added: 2, lines removed: 2
```diff
@@ -1071,11 +1054,11 @@
# Clean up empty parent directories
- archive_root_abs_resolved = archive_root_abs.resolve()
+ archive_root_abs = ARCHIVE_ROOT.resolve()
for parent in sorted({p.resolve().parent for p in removed}, key=lambda p: len(p.parts), reverse=True):
parent_resolved = parent.resolve()
if not parent_resolved.exists():
continue
try:
- while parent_resolved != archive_root_abs_resolved and not any(parent_resolved.iterdir()):
+ while parent_resolved != archive_root_abs and not any(parent_resolved.iterdir()):
parent_resolved.rmdir()
parent_resolved = parent_resolved.parent.resolve()
```
---
---
Moving the traceback import to the top and checking for lint errors:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **read_lints** • Read lints for 1 file
Lint paths:
- `zip_sequences.py`
Lint results
**No lint errors found**
---
---
Adding traceback to the imports:
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 1, lines removed: undefined
```diff
@@ -19,4 +19,5 @@
import tempfile
import time
+ import traceback
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path
```
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: undefined, lines removed: 2
```diff
@@ -1041,10 +1041,8 @@
rel = zip_path
log("zip", f"Error processing archive {rel}: {e}", verbose_only=True, verbose=verbose)
- import traceback
log("zip", f"Traceback: {traceback.format_exc()}", verbose_only=True, verbose=verbose)
continue
except Exception as e:
log("zip", f"Error scanning for {pattern} archives: {e}", verbose_only=True, verbose=verbose)
- import traceback
log("zip", f"Traceback: {traceback.format_exc()}", verbose_only=True, verbose=verbose)
continue
```
---
---
## Summary of Fixes
1. **Improved path resolution**: Resolves paths to absolute before checking existence, which helps with network paths (`\\NEXUS\proj\...`).
2. **Better logging**: Added verbose logging to show:
- When scanning starts
- Which archives are being checked
- Which archives are kept (have matching sequence directories)
- Which archives are removed
- Any errors encountered
3. **More robust error handling**: Added traceback logging for debugging.
4. **Fixed batch file error**: Changed the error level comparison from string to numeric to fix the ". was unexpected at this time" error.
The cleanup function should now:
- Find all `.zip` and `.7z` archives in `Renders/_zipped/`
- Check if the corresponding sequence directory exists (using resolved absolute paths)
- Remove orphaned archives and their metadata JSON files
- Log what it's doing (visible messages, not just verbose)
Run `ZipSeqArchv.bat` again. You should see messages like:
- `"Removing orphan archive ..."` when it finds deleted sequences
- `"Removed orphan metadata ..."` when it removes the JSON files
- `"Removed X orphan archive(s)."` in the summary
If it still doesn't find the orphaned archives, the verbose logging will help identify why.
---
_**User (2025-11-25 19:57Z)**_
PS P:\250827_FestivalTurf\Renders> P:\250827_FestivalTurf\Renders\ZipSeqArchv.bat
[init] Loading configuration sources...
[init] Reading project config at P:\250827_FestivalTurf\.config\config.json
[init] Configuration loaded from project
[init] zip_sequences starting up...
[init] Working directory: P:\250827_FestivalTurf
[init] Mode: zip, zipper: 7z, jobs arg: auto
[init] 7z executable: C:\ProgramData\chocolatey\bin\7z.EXE, Max7zInst: auto, method: LZMA2, level: 5
[init] Scanning sequences under \\NEXUS\proj\250827_FestivalTurf\Renders
[scan] [1] Inspecting 01 Develop Your Turf Plan\01_Blueprint
[scan] 01 Develop Your Turf Plan\01_Blueprint metadata unchanged; archive up to date
[scan] [2] Inspecting 01 Develop Your Turf Plan\01_Blueprint_insert
[scan] 01 Develop Your Turf Plan\01_Blueprint_insert metadata unchanged; archive up to date
[scan] [3] Inspecting 01 Develop Your Turf Plan\02_Roll Sizes
[scan] 01 Develop Your Turf Plan\02_Roll Sizes metadata unchanged; archive up to date
[scan] [4] Inspecting 01 Develop Your Turf Plan\03_Window
[scan] 01 Develop Your Turf Plan\03_Window metadata unchanged; archive up to date
[scan] [5] Inspecting 01 Develop Your Turf Plan\04_Dog
[scan] 01 Develop Your Turf Plan\04_Dog metadata unchanged; archive up to date
[scan] [10] Inspecting 02 Preparing the Base\01_intro
[scan] [20] Inspecting 02 Preparing the Base\10_FT_Talking
[scan] [30] Inspecting 03 Roll Out And Stretch\05_diagram
[scan] Skipping 04 Securing Your Seam\02_FT talking (unchanged since archive)
[scan] Skipping 04 Securing Your Seam\02_FT talking_Pt2 (unchanged since archive)
[scan] [40] Inspecting 04 Securing Your Seam\02_S seam insert
[scan] Skipping 04 Securing Your Seam\04_holding nails (unchanged since archive)
[scan] Skipping 04 Securing Your Seam\09_ending (unchanged since archive)
[scan] [50] Inspecting 04 Securing Your Seam\Nail_dimpling_example
[scan] Skipping 05 Stretch Cut Nail\01_intro (unchanged since archive)
[scan] [60] Inspecting 05 Stretch Cut Nail\Visual 10
[scan] [70] Inspecting 06 Infill And Powerbrooming\Visual 2B
[scan] [80] Inspecting blooper_reel\weird\ft_cutting_turf
[zip] Archives already up to date; no sequences needed zipping.
[scan] Summary: scanned 81, quick-skipped 35, state-skipped 46, empty 0, queued 0
---
_**Agent (model gpt-5.1-codex, mode Agent)**_
Thought Process
**Investigating missing cleanup logs**
---
Tool use: **command** • Run command: cd "A:/1 Amazon_Active_Projects/3 ProjectStructure" && dir
```bash
cd "A:/1 Amazon_Active_Projects/3 ProjectStructure" && dir
```
```
Directory: A:\1 Amazon_Active_Projects\3 ProjectStructure
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 11/19/2025 11:48 AM components
d---- 10/10/2025 3:30 PM scripts_old
----- 9/17/2025 1:52 PM 5757 compress_blend_files.bat
-a--- 11/25/2025 12:56 PM 168 config.json
-a--- 11/10/2025 10:18 AM 1322 CONFIG.md
-a--- 11/11/2025 10:57 AM 14077 ConfigLoader.ps1
-a--- 11/10/2025 10:58 AM 2395 GetStructDir.ps1
----- 10/29/2025 12:47 PM 298 NewDaily.bat
-a--- 11/10/2025 10:18 AM 2793 NewProject.bat
----- 9/11/2025 3:51 PM 4990 remove_action_fake_users.bat
----- 10/29/2025 12:47 PM 16755 replace_cel_with_bsdf.py
-a--- 11/10/2025 3:43 PM 1458 UnzipSeqArchv.bat
----- 8/26/2025 3:57 PM 1657 UpdateAllSequences.bat
-a--- 11/10/2025 10:58 AM 5590 UpdateProjectBatches.ps1
-a--- 11/10/2025 3:45 PM 1408 UpdateSequences.bat
-a--- 11/10/2025 10:18 AM 12569 UpdateSequences.ps1
-a--- 11/10/2025 10:18 AM 296 UpgradeSeqBatches.ps1
-a--- 11/10/2025 10:18 AM 9371 UpgradeToGitProj.bat
-a--- 11/25/2025 1:11 PM 45769 zip_sequences.py
-a--- 11/25/2025 1:08 PM 1459 ZipSeqArchv.bat
```
---
Thought Process
**Analyzing cleanup logic in zipped processing**
I'm checking why cleanup isn't running when no work items are found in the zipped processing function. I see that the code returns early before cleanup runs if there are no work items, which seems like a bug, so I'll plan to move the cleanup call before the return and possibly conditionally based on the render root's existence. I'll edit the relevant code section around line 881 accordingly and use a patch to apply the fix.
---
Tool use: **apply_patch**
Parameters:
```json
{
"relativeWorkspacePath": "zip_sequences.py"
}
```
Additional data:
```json
{
"codeblockId": "1974fc26-4703-42ed-9c78-5f5dbe1f53a1"
}
```
Result:
```json
{
"diff": {
"chunks": [
{
"diffString": " f\"state-skipped {state_skipped}, empty {empty_dirs}, queued {queued}\",\r\n )\r\n+ removed = cleanup_orphan_archives(verbose=verbose)\r\n+ if removed:\r\n+ log(\"zip\", f\"Removed {removed} orphan archive(s).\", verbose=verbose)\r\n return 0\r\n \r",
"oldStart": 890,
"newStart": 890,
"oldLines": 4,
"newLines": 7,
"linesAdded": 3
}
],
"editor": "EDITOR_AI"
},
"shouldAutoFixLints": true,
"resultForModel": "Success. Updated the following files:\nM zip_sequences.py",
"contentsAfterEdit": "#!/usr/bin/env python3\r\n\"\"\"Maintain zipped render sequences for Git hooks.\r\n\r\nDefault mode scans `Renders/`, produces ZIP archives under `Renders/_zipped/`,\r\nand stages any updated archives so commits only track compact files. Switch to\r\n`--mode expand` to inflate the tracked archives back into the ignored working\r\ndirectories after checkouts or pulls.\r\n\"\"\"\r\n\r\nfrom __future__ import annotations\r\n\r\nimport argparse\r\nimport json\r\nimport os\r\nimport platform\r\nimport shutil\r\nimport subprocess\r\nimport sys\r\nimport tempfile\r\nimport time\r\nimport traceback\r\nfrom concurrent.futures import ThreadPoolExecutor, as_completed\r\nfrom pathlib import Path\r\nfrom typing import Iterator, Sequence\r\n\r\n# Try to import psutil for cross-platform RAM detection\r\ntry:\r\n import psutil\r\n HAS_PSUTIL = True\r\nexcept ImportError:\r\n HAS_PSUTIL = False\r\n # For Windows fallback\r\n if platform.system() == \"Windows\":\r\n try:\r\n import ctypes\r\n HAS_CTYPES = True\r\n except ImportError:\r\n HAS_CTYPES = False\r\n else:\r\n HAS_CTYPES = False\r\n\r\n\r\nRENDER_ROOT = Path(\"Renders\")\r\nARCHIVE_ROOT = RENDER_ROOT / \"_zipped\"\r\nSEQUENCE_EXTENSIONS = {\r\n \".png\",\r\n \".jpg\",\r\n \".jpeg\",\r\n \".tif\",\r\n \".tiff\",\r\n \".exr\",\r\n}\r\nSTATE_SUFFIX = \".meta.json\"\r\nDEFAULT_CONFIG = {\r\n \"zipper\": \"7z\",\r\n \"compression\": 9,\r\n \"compressionMethod\": \"LZMA2\", # Compression method: LZMA2 (multi-threaded), PPMd (single-threaded), BZip2, Deflate\r\n \"dailyFormat\": \"daily_YYMMDD\",\r\n \"Max7zInst\": 0, # Maximum concurrent 7z instances (0 = auto-calculate)\r\n}\r\n\r\n\r\ndef log(mode: str, message: str, *, verbose_only: bool = False, verbose: bool = False) -> None:\r\n if verbose_only and not verbose:\r\n return\r\n print(f\"[{mode}] {message}\", flush=True)\r\n\r\n\r\ndef load_config() -> dict:\r\n # First try to load from project's .config folder (current working directory)\r\n # Then fall back to ProjectStructure repo config (next to zip_sequences.py)\r\n cwd = Path.cwd()\r\n project_config = cwd / \".config\" / \"config.json\"\r\n repo_config = Path(__file__).resolve().with_name(\"config.json\")\r\n \r\n config_paths = [\r\n (\"project\", project_config),\r\n (\"repo\", repo_config),\r\n ]\r\n \r\n log(\"init\", \"Loading configuration sources...\")\r\n for source, config_path in config_paths:\r\n try:\r\n if config_path.exists():\r\n log(\"init\", f\"Reading {source} config at {config_path}\")\r\n text = config_path.read_text(encoding=\"utf-8\")\r\n try:\r\n data = json.loads(text)\r\n if isinstance(data, dict):\r\n merged = DEFAULT_CONFIG.copy()\r\n merged.update(data)\r\n log(\"init\", f\"Configuration loaded from {source}\")\r\n return merged\r\n except json.JSONDecodeError:\r\n log(\"init\", f\"Config file at {config_path} is invalid JSON; skipping\")\r\n continue\r\n except OSError:\r\n log(\"init\", f\"Unable to read config at {config_path}; skipping\")\r\n continue\r\n \r\n # If no config found, return defaults\r\n log(\"init\", \"No config files found; using default settings\")\r\n return DEFAULT_CONFIG.copy()\r\n\r\n\r\nCONFIG = load_config()\r\nzipper_val = CONFIG.get(\"zipper\", \"7z\")\r\n# Handle both old boolean format and new string format\r\nif isinstance(zipper_val, bool):\r\n ZIPPER_TYPE = \"7z\" if zipper_val else \"zip\"\r\nelse:\r\n ZIPPER_TYPE = str(zipper_val).lower()\r\n\r\nCOMPRESSION_LEVEL = CONFIG.get(\"compression\", 9)\r\nif isinstance(COMPRESSION_LEVEL, str):\r\n try:\r\n COMPRESSION_LEVEL = int(COMPRESSION_LEVEL)\r\n except ValueError:\r\n COMPRESSION_LEVEL = 9\r\nif not isinstance(COMPRESSION_LEVEL, int):\r\n COMPRESSION_LEVEL = 9\r\nCOMPRESSION_LEVEL = max(0, min(9, COMPRESSION_LEVEL))\r\n\r\nCOMPRESSION_METHOD = CONFIG.get(\"compressionMethod\", \"LZMA2\")\r\n# Validate compression method\r\nvalid_methods = {\"LZMA2\", \"PPMd\", \"BZip2\", \"Deflate\"}\r\nif COMPRESSION_METHOD not in valid_methods:\r\n COMPRESSION_METHOD = \"LZMA2\" # Default to LZMA2 for multi-threading support\r\n\r\nMAX_7Z_INSTANCES = CONFIG.get(\"Max7zInst\", 0)\r\nif MAX_7Z_INSTANCES is not None:\r\n if isinstance(MAX_7Z_INSTANCES, str):\r\n try:\r\n MAX_7Z_INSTANCES = int(MAX_7Z_INSTANCES)\r\n except ValueError:\r\n MAX_7Z_INSTANCES = 0\r\n if not isinstance(MAX_7Z_INSTANCES, int) or MAX_7Z_INSTANCES < 1:\r\n MAX_7Z_INSTANCES = 0\r\n# Treat 0 as None (auto-calculate)\r\nif MAX_7Z_INSTANCES == 0:\r\n MAX_7Z_INSTANCES = None\r\n\r\nSEVEN_Z_EXE: str | None = None\r\nif ZIPPER_TYPE == \"7z\":\r\n SEVEN_Z_EXE = shutil.which(\"7z\") or shutil.which(\"7za\")\r\n\r\n\r\ndef parse_args() -> argparse.Namespace:\r\n parser = argparse.ArgumentParser(description=\"Sync render sequences with zipped archives.\")\r\n parser.add_argument(\r\n \"--mode\",\r\n choices=(\"zip\", \"expand\"),\r\n default=\"zip\",\r\n help=\"zip sequences for commit (default) or expand tracked archives\",\r\n )\r\n parser.add_argument(\"--jobs\", type=int, help=\"max parallel workers\")\r\n parser.add_argument(\"--verbose\", action=\"store_true\", help=\"print extra progress details\")\r\n return parser.parse_args()\r\n\r\n\r\ndef get_available_ram() -> int | None:\r\n \"\"\"Get available RAM in bytes, reserving 20% for system.\r\n \r\n Returns:\r\n Available RAM in bytes, or None if detection fails.\r\n \"\"\"\r\n try:\r\n if HAS_PSUTIL:\r\n # Use psutil for cross-platform RAM detection\r\n mem = psutil.virtual_memory()\r\n # Reserve 20% for system, use 80% for compression jobs\r\n available = int(mem.total * 0.8)\r\n return available\r\n elif HAS_CTYPES and platform.system() == \"Windows\":\r\n # Windows fallback using ctypes\r\n class MEMORYSTATUSEX(ctypes.Structure):\r\n _fields_ = [\r\n (\"dwLength\", ctypes.c_ulong),\r\n (\"dwMemoryLoad\", ctypes.c_ulong),\r\n (\"ullTotalPhys\", ctypes.c_ulonglong),\r\n (\"ullAvailPhys\", ctypes.c_ulonglong),\r\n (\"ullTotalPageFile\", ctypes.c_ulonglong),\r\n (\"ullAvailPageFile\", ctypes.c_ulonglong),\r\n (\"ullTotalVirtual\", ctypes.c_ulonglong),\r\n (\"ullAvailVirtual\", ctypes.c_ulonglong),\r\n (\"ullAvailExtendedVirtual\", ctypes.c_ulonglong),\r\n ]\r\n \r\n kernel32 = ctypes.windll.kernel32\r\n kernel32.GlobalMemoryStatusEx.argtypes = [ctypes.POINTER(MEMORYSTATUSEX)]\r\n kernel32.GlobalMemoryStatusEx.restype = ctypes.c_bool\r\n \r\n mem_status = MEMORYSTATUSEX()\r\n mem_status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)\r\n \r\n if kernel32.GlobalMemoryStatusEx(ctypes.byref(mem_status)):\r\n # Reserve 20% for system, use 80% for compression jobs\r\n available = int(mem_status.ullTotalPhys * 0.8)\r\n return available\r\n except Exception:\r\n pass\r\n \r\n return None\r\n\r\n\r\ndef estimate_ram_per_job(seq_dir: Path, seq_state: dict) -> int:\r\n \"\"\"Estimate RAM usage per compression job based on folder size.\r\n \r\n Args:\r\n seq_dir: Path to the sequence directory\r\n seq_state: State dictionary containing file information\r\n \r\n Returns:\r\n Estimated RAM usage in bytes\r\n \"\"\"\r\n # Calculate total folder size from seq_state\r\n total_bytes = sum(entry.get(\"size\", 0) for entry in seq_state.get(\"files\", []))\r\n \r\n if ZIPPER_TYPE == \"7z\":\r\n # Fixed dictionary size: 1GB (1024MB)\r\n # Actual observed usage: ~2GB per process regardless of sequence size\r\n # Use a tight estimate to allow maximum concurrency\r\n # Estimate 3GB per job (50% safety margin over observed ~2GB)\r\n base_ram = 3 * 1024 * 1024 * 1024 # 3GB base estimate\r\n \r\n # For very large sequences (>50GB), add small buffer\r\n if total_bytes > 50 * 1024 * 1024 * 1024: # >50GB\r\n # Add 1GB for very large sequences\r\n estimated_ram = base_ram + (1 * 1024 * 1024 * 1024)\r\n else:\r\n estimated_ram = base_ram\r\n \r\n return estimated_ram\r\n else:\r\n # zip compression is more memory-efficient\r\n # Conservative estimate: 1GB per job\r\n return 1024 * 1024 * 1024 # 1 GB\r\n\r\n\r\ndef max_workers(\r\n requested: int | None,\r\n work_items: list[tuple[Path, Path, Path, dict]] | None = None,\r\n *,\r\n verbose: bool = False\r\n) -> tuple[int, int | None]:\r\n \"\"\"Calculate maximum worker count based on CPU and RAM constraints.\r\n \r\n Args:\r\n requested: User-requested worker count (from --jobs)\r\n work_items: List of work items (seq_dir, zip_path, state_path, seq_state)\r\n verbose: Whether to log RAM-based calculations\r\n \r\n Returns:\r\n Tuple of (worker_count, per_job_memory_limit_bytes)\r\n per_job_memory_limit_bytes is None if not using 7z or RAM detection failed\r\n \"\"\"\r\n cpu = os.cpu_count() or 1\r\n cpu_limit = max(1, min(8, cpu))\r\n if requested and requested > 0:\r\n cpu_limit = min(requested, max(1, cpu))\r\n \r\n # If no work items provided, return CPU-based limit\r\n if work_items is None or len(work_items) == 0:\r\n return (cpu_limit, {})\r\n \r\n # Try to calculate RAM-based limit\r\n available_ram = get_available_ram()\r\n \r\n if available_ram is None:\r\n # RAM detection failed, fall back to CPU limit\r\n if verbose:\r\n log(\"zip\", \"RAM detection failed, using CPU-based worker limit\", verbose_only=True, verbose=verbose)\r\n return (cpu_limit, {})\r\n \r\n # For 7z: use fixed dictionary size and calculate workers\r\n if ZIPPER_TYPE == \"7z\":\r\n # Fixed dictionary size: 1GB (1024MB)\r\n FIXED_DICT_SIZE_MB = 1024\r\n fixed_dict_size_bytes = FIXED_DICT_SIZE_MB * 1024 * 1024\r\n \r\n # Check if Max7zInst is configured\r\n if MAX_7Z_INSTANCES is not None:\r\n # Use configured maximum instances, but still respect user's --jobs and work items\r\n final_limit = MAX_7Z_INSTANCES\r\n num_work_items = len(work_items) if work_items else 0\r\n if num_work_items > 0:\r\n final_limit = min(final_limit, num_work_items)\r\n if requested and requested > 0:\r\n final_limit = min(final_limit, requested)\r\n \r\n # Create RAM limits dict (all use fixed dict size, but return as dict for consistency)\r\n ram_limits_dict: dict[Path, int] = {}\r\n if work_items:\r\n for seq_dir, _, _, _ in work_items:\r\n ram_limits_dict[seq_dir] = fixed_dict_size_bytes\r\n \r\n if verbose:\r\n log(\r\n \"zip\",\r\n f\"Using Max7zInst={MAX_7Z_INSTANCES} from config → \"\r\n f\"work items: {num_work_items}, requested: {requested}, final: {final_limit}\",\r\n verbose_only=True,\r\n verbose=verbose\r\n )\r\n \r\n return (final_limit, ram_limits_dict)\r\n \r\n # Auto-calculate based on RAM if Max7zInst not configured\r\n # Prioritize maximum concurrency: one worker per sequence when possible\r\n if available_ram is not None:\r\n # available_ram is already 80% of total (20% reserved for system)\r\n # Use 95% of available RAM for compression jobs\r\n compression_ram = int(available_ram * 0.95)\r\n \r\n # Estimate RAM for each work item\r\n work_items_with_ram: list[tuple[Path, int, tuple[Path, Path, Path, dict]]] = []\r\n ram_limits_dict: dict[Path, int] = {}\r\n \r\n for seq_dir, zip_path, state_path, seq_state in work_items:\r\n try:\r\n estimated_ram = estimate_ram_per_job(seq_dir, seq_state)\r\n work_items_with_ram.append((seq_dir, estimated_ram, (seq_dir, zip_path, state_path, seq_state)))\r\n ram_limits_dict[seq_dir] = estimated_ram\r\n except Exception:\r\n # If estimation fails, use a safe default (3GB)\r\n default_ram = 3 * 1024 * 1024 * 1024 # 3GB\r\n work_items_with_ram.append((seq_dir, default_ram, (seq_dir, zip_path, state_path, seq_state)))\r\n ram_limits_dict[seq_dir] = default_ram\r\n \r\n num_work_items = len(work_items) if work_items else 0\r\n \r\n # Calculate total estimated RAM\r\n total_estimated_ram = sum(ram for _, ram, _ in work_items_with_ram)\r\n \r\n # If all sequences fit in available RAM, use one worker per sequence (maximum concurrency)\r\n if total_estimated_ram <= compression_ram:\r\n worker_count = num_work_items\r\n else:\r\n # Not all fit - use bin-packing to minimize workers\r\n # Sort by estimated RAM (largest first) for bin-packing\r\n work_items_with_ram.sort(key=lambda x: x[1], reverse=True)\r\n \r\n # Bin-packing algorithm: pack largest items first\r\n bins: list[list[tuple[Path, int, tuple[Path, Path, Path, dict]]]] = []\r\n bin_remaining: list[int] = []\r\n \r\n for seq_dir, estimated_ram, work_item in work_items_with_ram:\r\n # Try to fit in existing bin\r\n placed = False\r\n for i, remaining in enumerate(bin_remaining):\r\n if remaining >= estimated_ram:\r\n bins[i].append((seq_dir, estimated_ram, work_item))\r\n bin_remaining[i] -= estimated_ram\r\n placed = True\r\n break\r\n \r\n # If doesn't fit, create new bin\r\n if not placed:\r\n bins.append([(seq_dir, estimated_ram, work_item)])\r\n bin_remaining.append(compression_ram - estimated_ram)\r\n \r\n # Worker count is number of bins\r\n worker_count = len(bins)\r\n \r\n # Cap at number of actual work items (can't have more workers than jobs)\r\n if num_work_items > 0:\r\n worker_count = min(worker_count, num_work_items)\r\n \r\n # Respect user's --jobs if provided\r\n if requested and requested > 0:\r\n worker_count = min(worker_count, requested)\r\n \r\n if verbose:\r\n ram_gb = available_ram / (1024 ** 3)\r\n compression_ram_gb = compression_ram / (1024 ** 3)\r\n total_estimated_gb = total_estimated_ram / (1024 ** 3)\r\n log(\r\n \"zip\",\r\n f\"RAM: {ram_gb:.1f}GB available (80% of total), {compression_ram_gb:.1f}GB for compression (95%)\",\r\n verbose_only=True,\r\n verbose=verbose\r\n )\r\n log(\r\n \"zip\",\r\n f\"Estimated RAM per sequence: {total_estimated_gb:.1f}GB total across {num_work_items} sequences\",\r\n verbose_only=True,\r\n verbose=verbose\r\n )\r\n if total_estimated_ram <= compression_ram:\r\n log(\r\n \"zip\",\r\n f\"All sequences fit in RAM → using {worker_count} workers (one per sequence)\",\r\n verbose_only=True,\r\n verbose=verbose\r\n )\r\n else:\r\n log(\r\n \"zip\",\r\n f\"Using bin-packing: {worker_count} workers needed\",\r\n verbose_only=True,\r\n verbose=verbose\r\n )\r\n log(\r\n \"zip\",\r\n f\"Final worker count: {worker_count} (requested: {requested})\",\r\n verbose_only=True,\r\n verbose=verbose\r\n )\r\n \r\n # Return worker count and RAM limits dict\r\n return (worker_count, ram_limits_dict)\r\n \r\n # RAM detection failed, use a safe default (no CPU limit)\r\n default_limit = 4\r\n num_work_items = len(work_items) if work_items else 0\r\n if num_work_items > 0:\r\n default_limit = min(default_limit, num_work_items)\r\n if requested and requested > 0:\r\n default_limit = min(default_limit, requested)\r\n \r\n # Create RAM limits dict with safe defaults (12GB per job for 1GB dict)\r\n ram_limits_dict: dict[Path, int] = {}\r\n if work_items:\r\n default_ram = 12 * 1024 * 1024 * 1024 # 12GB default\r\n for seq_dir, _, _, _ in work_items:\r\n ram_limits_dict[seq_dir] = default_ram\r\n \r\n if verbose:\r\n log(\r\n \"zip\",\r\n f\"RAM detection failed and Max7zInst not set, using default worker limit → \"\r\n f\"work items: {num_work_items}, requested: {requested}, final: {default_limit}\",\r\n verbose_only=True,\r\n verbose=verbose\r\n )\r\n return (default_limit, ram_limits_dict)\r\n \r\n # For zip compression, use existing estimation-based approach\r\n # Estimate RAM per job for each work item\r\n ram_estimates = []\r\n ram_limits_dict: dict[Path, int] = {}\r\n for seq_dir, zip_path, state_path, seq_state in work_items:\r\n try:\r\n estimated_ram = estimate_ram_per_job(seq_dir, seq_state)\r\n ram_estimates.append(estimated_ram)\r\n ram_limits_dict[seq_dir] = estimated_ram\r\n except Exception:\r\n # If estimation fails, use fallback estimate\r\n fallback_ram = 1024 * 1024 * 1024 # 1GB fallback for zip\r\n ram_estimates.append(fallback_ram)\r\n ram_limits_dict[seq_dir] = fallback_ram\r\n \r\n if not ram_estimates:\r\n return (cpu_limit, {})\r\n \r\n max_ram_per_job = max(ram_estimates)\r\n ram_limit = max(1, available_ram // max_ram_per_job)\r\n ram_limit = min(ram_limit, 6) # Conservative limit for zip\r\n final_limit = min(cpu_limit, ram_limit)\r\n \r\n if verbose:\r\n ram_gb = available_ram / (1024 ** 3)\r\n max_ram_gb = max_ram_per_job / (1024 ** 3)\r\n log(\r\n \"zip\",\r\n f\"RAM: {ram_gb:.1f}GB available, ~{max_ram_gb:.1f}GB per job → \"\r\n f\"RAM limit: {ram_limit}, CPU limit: {cpu_limit}, final: {final_limit}\",\r\n verbose_only=True,\r\n verbose=verbose\r\n )\r\n \r\n return (final_limit, ram_limits_dict)\r\n\r\n\r\ndef is_archive_path(path: Path) -> bool:\r\n return any(part in (\"_archive\", \"_CURRENT\") for part in path.parts)\r\n\r\n\r\ndef find_sequence_dirs(root: Path) -> Iterator[Path]:\r\n for dirpath, dirnames, filenames in os.walk(root):\r\n path = Path(dirpath)\r\n dirnames[:] = [d for d in dirnames if d not in (\"_archive\", \"_CURRENT\")]\r\n if is_archive_path(path):\r\n continue\r\n has_frames = any(Path(dirpath, f).suffix.lower() in SEQUENCE_EXTENSIONS for f in filenames)\r\n if has_frames:\r\n yield path\r\n\r\n\r\ndef iter_sequence_files(seq_dir: Path) -> Iterator[Path]:\r\n for dirpath, dirnames, filenames in os.walk(seq_dir):\r\n path = Path(dirpath)\r\n dirnames[:] = [d for d in dirnames if d not in (\"_archive\", \"_CURRENT\")]\r\n if is_archive_path(path):\r\n continue\r\n for filename in filenames:\r\n yield path / filename\r\n\r\n\r\ndef compute_state(seq_dir: Path) -> dict:\r\n entries = []\r\n files = sorted(\r\n iter_sequence_files(seq_dir),\r\n key=lambda p: p.relative_to(seq_dir).as_posix(),\r\n )\r\n for file_path in files:\r\n stat = file_path.stat()\r\n entries.append(\r\n {\r\n \"path\": file_path.relative_to(seq_dir).as_posix(),\r\n \"size\": stat.st_size,\r\n \"mtime_ns\": stat.st_mtime_ns,\r\n }\r\n )\r\n return {\"files\": entries}\r\n\r\n\r\ndef current_state(seq_dir: Path) -> dict:\r\n if not seq_dir.exists() or not seq_dir.is_dir():\r\n return {\"files\": []}\r\n return compute_state(seq_dir)\r\n\r\n\r\ndef load_state(state_path: Path) -> dict | None:\r\n if not state_path.exists():\r\n return None\r\n try:\r\n return json.loads(state_path.read_text())\r\n except json.JSONDecodeError:\r\n return None\r\n\r\n\r\ndef state_changed(seq_state: dict, stored_state: dict | None) -> bool:\r\n if stored_state is None:\r\n return True\r\n return seq_state != stored_state\r\n\r\n\r\ndef archive_path_for(seq_dir: Path) -> Path:\r\n rel = seq_dir.relative_to(RENDER_ROOT)\r\n suffix = \".7z\" if ZIPPER_TYPE == \"7z\" else \".zip\"\r\n return (ARCHIVE_ROOT / rel).with_suffix(suffix)\r\n\r\n\r\ndef sequence_dir_for(zip_path: Path) -> Path:\r\n rel = zip_path.relative_to(ARCHIVE_ROOT)\r\n return (RENDER_ROOT / rel).with_suffix(\"\")\r\n\r\n\r\ndef state_path_for(zip_path: Path) -> Path:\r\n return zip_path.with_suffix(zip_path.suffix + STATE_SUFFIX)\r\n\r\n\r\ndef zip_sequence(seq_dir: Path, zip_path: Path, per_job_memory_limit: int | None = None, worker_count: int = 1, *, verbose: bool = False) -> None:\r\n if ZIPPER_TYPE == \"7z\":\r\n if SEVEN_Z_EXE is None:\r\n raise RuntimeError(\r\n \"7z compression requested but 7z executable not found in PATH. \"\r\n \"Please install 7z (e.g., via Chocolatey: choco install 7zip) \"\r\n \"or set zipper to 'zip' in config.json\"\r\n )\r\n zip_path.parent.mkdir(parents=True, exist_ok=True)\r\n \r\n # If creating a .7z file, remove any existing .zip file for the same sequence\r\n if zip_path.suffix == \".7z\":\r\n old_zip_path = zip_path.with_suffix(\".zip\")\r\n if old_zip_path.exists():\r\n old_zip_path.unlink(missing_ok=True)\r\n old_state_path = state_path_for(old_zip_path)\r\n if old_state_path.exists():\r\n old_state_path.unlink(missing_ok=True)\r\n \r\n # Build list of files to archive with relative paths\r\n file_list = []\r\n for file_path in iter_sequence_files(seq_dir):\r\n rel_path = file_path.relative_to(seq_dir).as_posix()\r\n file_list.append(rel_path)\r\n \r\n if not file_list:\r\n raise RuntimeError(f\"No files found to archive in {seq_dir}\")\r\n \r\n # Create zip in temporary location first to avoid issues with corrupted existing files\r\n temp_zip = None\r\n list_file_path = None\r\n try:\r\n # Create temporary archive file path (but don't create the file - let 7z create it)\r\n temp_zip_path = tempfile.mktemp(suffix=\".7z\", dir=zip_path.parent)\r\n temp_zip = Path(temp_zip_path)\r\n \r\n # Create list file with absolute path\r\n fd, temp_path = tempfile.mkstemp(suffix=\".lst\", text=True)\r\n list_file_path = Path(temp_path)\r\n with os.fdopen(fd, \"w\", encoding=\"utf-8\") as list_file:\r\n for rel_path in file_list:\r\n list_file.write(rel_path + \"\\n\")\r\n list_file.flush()\r\n os.fsync(list_file.fileno()) # Ensure data is written to disk\r\n # File is closed here by context manager, small delay to ensure OS releases handle\r\n time.sleep(0.1)\r\n \r\n # Use absolute paths for both list file and temp zip\r\n list_file_abs = list_file_path.resolve()\r\n temp_zip_abs = temp_zip.resolve()\r\n # Create archive in temp location first (7z will create it fresh)\r\n cmd = [\r\n SEVEN_Z_EXE,\r\n \"a\",\r\n \"-y\",\r\n \"-bb0\", # Suppress progress output\r\n f\"-mx={COMPRESSION_LEVEL}\",\r\n \"-t7z\", # Use 7z format, not zip\r\n ]\r\n \r\n # Set compression method and memory/dictionary size based on method\r\n # PPMd uses memory parameter in method string, LZMA2 uses -md flag\r\n FIXED_DICT_SIZE_MB = 1024\r\n if COMPRESSION_METHOD == \"PPMd\":\r\n # PPMd: specify memory as part of method string\r\n cmd.append(f\"-m0=PPMd:mem={FIXED_DICT_SIZE_MB}m\")\r\n elif COMPRESSION_METHOD == \"LZMA2\":\r\n # LZMA2: use -md for dictionary size\r\n cmd.append(\"-m0=LZMA2\")\r\n cmd.append(f\"-md={FIXED_DICT_SIZE_MB}m\")\r\n elif COMPRESSION_METHOD == \"BZip2\":\r\n # BZip2: use -md for dictionary size (smaller max: 900KB)\r\n max_bzip2_dict = min(FIXED_DICT_SIZE_MB, 900) # BZip2 max is 900KB\r\n cmd.append(\"-m0=BZip2\")\r\n cmd.append(f\"-md={max_bzip2_dict}k\")\r\n elif COMPRESSION_METHOD == \"Deflate\":\r\n # Deflate: doesn't use dictionary size parameter\r\n cmd.append(\"-m0=Deflate\")\r\n else:\r\n # Fallback: use LZMA2\r\n cmd.append(\"-m0=LZMA2\")\r\n cmd.append(f\"-md={FIXED_DICT_SIZE_MB}m\")\r\n \r\n # CPU thread allocation: when there's only 1 worker, use all CPU cores\r\n # When there are multiple workers, use auto mode to let 7z decide\r\n # Note: PPMd is single-threaded and won't benefit from -mmt\r\n cpu_cores = os.cpu_count() or 1\r\n if COMPRESSION_METHOD == \"PPMd\":\r\n # PPMd is single-threaded, so -mmt won't help\r\n # But we can still set it for consistency\r\n cmd.append(\"-mmt=on\")\r\n elif worker_count == 1:\r\n # Single worker: use all CPU cores for maximum speed (LZMA2, BZip2, Deflate support this)\r\n cmd.append(f\"-mmt={cpu_cores}\")\r\n else:\r\n # Multiple workers: use auto mode (7z will manage threads)\r\n cmd.append(\"-mmt=on\")\r\n \r\n cmd.extend([\r\n str(temp_zip_abs),\r\n f\"@{list_file_abs}\",\r\n ])\r\n \r\n # Log the command in verbose mode for debugging\r\n if verbose:\r\n cmd_str = \" \".join(cmd)\r\n log(\"zip\", f\"7z command: {cmd_str}\", verbose_only=True, verbose=verbose)\r\n \r\n result = subprocess.run(\r\n cmd,\r\n cwd=seq_dir,\r\n check=False,\r\n stdout=subprocess.PIPE,\r\n stderr=subprocess.PIPE,\r\n text=True,\r\n )\r\n if result.returncode != 0:\r\n error_msg = result.stderr.strip() if result.stderr else \"Unknown error\"\r\n if result.stdout:\r\n error_msg += f\"\\nstdout: {result.stdout.strip()}\"\r\n raise RuntimeError(f\"7z compression failed: {error_msg}\")\r\n \r\n # Move temp zip to final location, replacing any existing file\r\n if zip_path.exists():\r\n zip_path.unlink()\r\n temp_zip.replace(zip_path)\r\n temp_zip = None # Mark as moved so we don't delete it\r\n finally:\r\n # Clean up temp zip if it wasn't moved\r\n if temp_zip and temp_zip.exists():\r\n try:\r\n temp_zip.unlink(missing_ok=True)\r\n except OSError:\r\n pass\r\n # Clean up list file, with retry in case 7z still has it open\r\n if list_file_path and list_file_path.exists():\r\n for attempt in range(3):\r\n try:\r\n list_file_path.unlink(missing_ok=True)\r\n break\r\n except PermissionError:\r\n if attempt < 2:\r\n time.sleep(0.1) # Wait 100ms before retry\r\n else:\r\n # Last attempt failed, just log and continue\r\n # The temp file will be cleaned up by the OS eventually\r\n pass\r\n return\r\n\r\n # Use zipfile (only if ZIPPER_TYPE == \"zip\")\r\n if ZIPPER_TYPE == \"zip\":\r\n from zipfile import ZIP_DEFLATED, ZIP_STORED, ZipFile\r\n\r\n zip_path.parent.mkdir(parents=True, exist_ok=True)\r\n if COMPRESSION_LEVEL <= 0:\r\n compression = ZIP_STORED\r\n zip_kwargs = {}\r\n else:\r\n compression = ZIP_DEFLATED\r\n zip_kwargs = {\"compresslevel\": COMPRESSION_LEVEL}\r\n\r\n with ZipFile(zip_path, \"w\", compression=compression, **zip_kwargs) as archive:\r\n for file_path in iter_sequence_files(seq_dir):\r\n archive.write(file_path, arcname=file_path.relative_to(seq_dir).as_posix())\r\n return\r\n\r\n # Unknown ZIPPER_TYPE - fail with clear error\r\n raise RuntimeError(\r\n f\"Unsupported ZIPPER_TYPE: {ZIPPER_TYPE!r}. \"\r\n f\"Expected '7z' or 'zip'. \"\r\n f\"Config zipper value: {CONFIG.get('zipper', 'not set')!r}\"\r\n )\r\n\r\n\r\ndef expand_sequence(zip_path: Path, seq_state: dict) -> None:\r\n target_dir = sequence_dir_for(zip_path)\r\n if target_dir.exists():\r\n shutil.rmtree(target_dir)\r\n target_dir.mkdir(parents=True, exist_ok=True)\r\n\r\n if ZIPPER_TYPE == \"7z\":\r\n if SEVEN_Z_EXE is None:\r\n raise RuntimeError(\r\n \"7z extraction requested but 7z executable not found in PATH. \"\r\n \"Please install 7z or set zipper to 'zip' in config.json\"\r\n )\r\n cmd = [\r\n SEVEN_Z_EXE,\r\n \"x\",\r\n \"-y\",\r\n str(zip_path),\r\n f\"-o{target_dir}\",\r\n ]\r\n result = subprocess.run(\r\n cmd,\r\n check=False,\r\n stdout=subprocess.PIPE,\r\n stderr=subprocess.PIPE,\r\n text=True,\r\n )\r\n if result.returncode != 0:\r\n error_msg = result.stderr.strip() if result.stderr else \"Unknown error\"\r\n if result.stdout:\r\n error_msg += f\"\\nstdout: {result.stdout.strip()}\"\r\n raise RuntimeError(f\"7z extraction failed: {error_msg}\")\r\n elif ZIPPER_TYPE == \"zip\":\r\n from zipfile import ZipFile\r\n\r\n with ZipFile(zip_path, \"r\") as archive:\r\n archive.extractall(target_dir)\r\n else:\r\n raise RuntimeError(\r\n f\"Unsupported ZIPPER_TYPE: {ZIPPER_TYPE!r}. \"\r\n f\"Expected '7z' or 'zip'. \"\r\n f\"Config zipper value: {CONFIG.get('zipper', 'not set')!r}\"\r\n )\r\n\r\n for entry in seq_state.get(\"files\", []):\r\n file_path = target_dir / entry[\"path\"]\r\n if file_path.exists():\r\n os.utime(file_path, ns=(entry[\"mtime_ns\"], entry[\"mtime_ns\"]))\r\n\r\n\r\ndef process_zip(seq_dir: Path, zip_path: Path, state_path: Path, seq_state: dict, per_job_memory_limit: int | None, worker_count: int, *, verbose: bool) -> Sequence[Path]:\r\n log(\"zip\", f\"{seq_dir} -> {zip_path}\", verbose_only=True, verbose=verbose)\r\n zip_sequence(seq_dir, zip_path, per_job_memory_limit, worker_count, verbose=verbose)\r\n state_path.write_text(json.dumps(seq_state, indent=2))\r\n return (zip_path, state_path)\r\n\r\n\r\ndef process_expand(zip_path: Path, state: dict, *, verbose: bool) -> None:\r\n log(\"expand\", f\"{zip_path} -> {sequence_dir_for(zip_path)}\", verbose_only=True, verbose=verbose)\r\n expand_sequence(zip_path, state)\r\n\r\n\r\ndef run_zip(requested_workers: int | None, *, verbose: bool) -> int:\r\n work_items: list[tuple[Path, Path, Path, dict]] = []\r\n log(\"init\", f\"Scanning sequences under {RENDER_ROOT.resolve()}\")\r\n total_scanned = 0\r\n quick_skipped = 0\r\n state_skipped = 0\r\n empty_dirs = 0\r\n queued = 0\r\n\r\n if RENDER_ROOT.exists():\r\n for seq_dir in find_sequence_dirs(RENDER_ROOT):\r\n total_scanned += 1\r\n rel = seq_dir.relative_to(RENDER_ROOT)\r\n if total_scanned <= 5 or total_scanned % 10 == 0:\r\n log(\"scan\", f\"[{total_scanned}] Inspecting {rel}\")\r\n\r\n # Get the target archive path (will be .7z if ZIPPER_TYPE is \"7z\")\r\n zip_path = archive_path_for(seq_dir)\r\n state_path = state_path_for(zip_path)\r\n \r\n # Quick check: if archive exists, load stored state first (fast)\r\n stored_state = load_state(state_path)\r\n \r\n # Check if we need to upgrade from .zip to .7z\r\n old_zip_path = None\r\n old_stored_state = None\r\n if ZIPPER_TYPE == \"7z\":\r\n old_zip_path = zip_path.with_suffix(\".zip\")\r\n if old_zip_path.exists():\r\n old_state_path = state_path_for(old_zip_path)\r\n old_stored_state = load_state(old_state_path)\r\n # If old .zip exists and .7z doesn't, use old .zip's state for comparison\r\n if not zip_path.exists() and old_stored_state is not None:\r\n stored_state = old_stored_state\r\n \r\n # If .7z archive exists and we have stored state, do quick check before computing full state\r\n if zip_path.exists() and stored_state is not None:\r\n # Quick check: if directory mtime is older than archive, likely unchanged\r\n try:\r\n dir_mtime = seq_dir.stat().st_mtime_ns\r\n archive_mtime = zip_path.stat().st_mtime_ns\r\n # If directory wasn't modified since archive was created, skip state computation\r\n if dir_mtime <= archive_mtime:\r\n quick_skipped += 1\r\n if quick_skipped <= 5:\r\n log(\"scan\", f\"Skipping {rel} (unchanged since archive)\")\r\n # Still need to check for old .zip cleanup (we have .7z, so .zip is obsolete)\r\n if old_zip_path and old_zip_path.exists():\r\n old_zip_path.unlink(missing_ok=True)\r\n old_state_path = state_path_for(old_zip_path)\r\n if old_state_path.exists():\r\n old_state_path.unlink(missing_ok=True)\r\n continue\r\n except OSError:\r\n # If stat fails, fall through to full state computation\r\n pass\r\n \r\n # Compute current state only if we need to\r\n seq_state = compute_state(seq_dir)\r\n if not seq_state[\"files\"]:\r\n empty_dirs += 1\r\n if empty_dirs <= 5:\r\n log(\"scan\", f\"{rel} has no files; skipping\")\r\n continue\r\n \r\n # Check if state changed\r\n if stored_state is not None and not state_changed(seq_state, stored_state):\r\n # Metadata matches stored state\r\n state_skipped += 1\r\n if state_skipped <= 5:\r\n log(\"scan\", f\"{rel} metadata unchanged; archive up to date\")\r\n if zip_path.exists():\r\n # .7z exists and is up to date, clean up old .zip if it exists\r\n if old_zip_path and old_zip_path.exists():\r\n old_zip_path.unlink(missing_ok=True)\r\n old_state_path = state_path_for(old_zip_path)\r\n if old_state_path.exists():\r\n old_state_path.unlink(missing_ok=True)\r\n elif old_zip_path and old_zip_path.exists() and old_stored_state is not None:\r\n # .7z doesn't exist, but .zip exists and metadata matches\r\n # Keep the .zip file, don't create .7z\r\n continue\r\n else:\r\n # No archive exists, but state matches (shouldn't happen, but be safe)\r\n continue\r\n\r\n work_items.append((seq_dir, zip_path, state_path, seq_state))\r\n queued += 1\r\n if queued <= 5 or queued % 5 == 0:\r\n total_bytes = sum(entry.get(\"size\", 0) for entry in seq_state.get(\"files\", []))\r\n size_gb = total_bytes / (1024 ** 3)\r\n log(\"scan\", f\"Queued {rel} for compression (~{size_gb:.2f}GB) [{queued} total]\")\r\n\r\n if not work_items:\r\n if not RENDER_ROOT.exists():\r\n log(\"zip\", \"Render root 'Renders' not found; nothing to zip.\")\r\n else:\r\n log(\"zip\", \"Archives already up to date; no sequences needed zipping.\")\r\n log(\r\n \"scan\",\r\n f\"Summary: scanned {total_scanned}, quick-skipped {quick_skipped}, \"\r\n f\"state-skipped {state_skipped}, empty {empty_dirs}, queued {queued}\",\r\n )\r\n removed = cleanup_orphan_archives(verbose=verbose)\r\n if removed:\r\n log(\"zip\", f\"Removed {removed} orphan archive(s).\", verbose=verbose)\r\n return 0\r\n\r\n # Calculate RAM-aware worker count based on work items\r\n worker_count, ram_limits_dict = max_workers(requested_workers, work_items, verbose=verbose)\r\n log(\r\n \"init\",\r\n f\"Preparing to compress {len(work_items)} sequence(s) with {worker_count} worker(s)\",\r\n )\r\n\r\n updated_paths: list[Path] = []\r\n\r\n total = len(work_items)\r\n completed = 0\r\n\r\n with ThreadPoolExecutor(max_workers=worker_count) as executor:\r\n future_map = {\r\n executor.submit(process_zip, seq_dir, zip_path, state_path, seq_state, ram_limits_dict.get(seq_dir), worker_count, verbose=verbose): seq_dir\r\n for seq_dir, zip_path, state_path, seq_state in work_items\r\n }\r\n\r\n for future in as_completed(future_map):\r\n updated_paths.extend(future.result())\r\n completed += 1\r\n seq_dir = future_map[future]\r\n rel = seq_dir.relative_to(RENDER_ROOT)\r\n log(\"zip\", f\"{completed}/{total} {rel}\")\r\n\r\n updated_count = len(updated_paths) // 2\r\n log(\"zip\", f\"Updated {updated_count} sequence archive(s).\", verbose=verbose)\r\n if updated_paths:\r\n log(\r\n \"zip\",\r\n \"Archives updated. Stage manually with `git add Renders/_zipped`, if desired.\",\r\n verbose_only=True,\r\n verbose=verbose,\r\n )\r\n log(\r\n \"scan\",\r\n f\"Summary: scanned {total_scanned}, quick-skipped {quick_skipped}, \"\r\n f\"state-skipped {state_skipped}, empty {empty_dirs}, queued {queued}\",\r\n )\r\n\r\n removed = cleanup_orphan_archives(verbose=verbose)\r\n if removed:\r\n log(\"zip\", f\"Removed {removed} orphan archive(s).\", verbose=verbose)\r\n\r\n return updated_count\r\n\r\n\r\ndef run_expand(worker_count: int, *, verbose: bool) -> int:\r\n if not ARCHIVE_ROOT.exists():\r\n log(\"expand\", \"No archives to expand (missing 'Renders/_zipped').\")\r\n return 0\r\n\r\n work_items: list[tuple[Path, dict]] = []\r\n\r\n # Look for both .zip and .7z archives\r\n archive_patterns = [\"*.zip\", \"*.7z\"]\r\n for pattern in archive_patterns:\r\n for zip_path in ARCHIVE_ROOT.rglob(pattern):\r\n state_path = state_path_for(zip_path)\r\n seq_state = load_state(state_path)\r\n if seq_state is None:\r\n log(\"expand\", f\"Skipping {zip_path} (missing metadata)\")\r\n continue\r\n\r\n target_dir = sequence_dir_for(zip_path)\r\n if current_state(target_dir) == seq_state:\r\n continue\r\n\r\n work_items.append((zip_path, seq_state))\r\n\r\n if not work_items:\r\n log(\"expand\", \"Working folders already match archives; nothing to expand.\")\r\n return 0\r\n\r\n total = len(work_items)\r\n completed = 0\r\n\r\n with ThreadPoolExecutor(max_workers=worker_count) as executor:\r\n future_map = {\r\n executor.submit(process_expand, zip_path, seq_state, verbose=verbose): zip_path\r\n for zip_path, seq_state in work_items\r\n }\r\n\r\n for future in as_completed(future_map):\r\n future.result()\r\n completed += 1\r\n zip_path = future_map[future]\r\n rel = zip_path.relative_to(ARCHIVE_ROOT)\r\n log(\"expand\", f\"{completed}/{total} {rel}\")\r\n\r\n log(\"expand\", f\"Refreshed {len(work_items)} sequence folder(s).\", verbose=verbose)\r\n return len(work_items)\r\n\r\n\r\ndef cleanup_orphan_archives(*, verbose: bool) -> int:\r\n if not ARCHIVE_ROOT.exists():\r\n log(\"zip\", \"Archive root does not exist; nothing to clean up.\", verbose_only=True, verbose=verbose)\r\n return 0\r\n\r\n removed: list[Path] = []\r\n \r\n log(\"zip\", f\"Scanning for orphan archives in {ARCHIVE_ROOT.resolve()}\", verbose_only=True, verbose=verbose)\r\n\r\n # Look for both .zip and .7z archives\r\n archive_patterns = [\"*.zip\", \"*.7z\"]\r\n for pattern in archive_patterns:\r\n try:\r\n for zip_path in ARCHIVE_ROOT.rglob(pattern):\r\n try:\r\n # Resolve to absolute paths for consistent checking\r\n zip_path_abs = zip_path.resolve()\r\n \r\n # Calculate state path BEFORE checking/removing archive\r\n state_path = state_path_for(zip_path)\r\n state_path_abs = state_path.resolve()\r\n \r\n # Calculate sequence directory using sequence_dir_for\r\n # This function works with paths relative to ARCHIVE_ROOT\r\n seq_dir = sequence_dir_for(zip_path)\r\n seq_dir_abs = seq_dir.resolve()\r\n \r\n # Check if sequence directory exists and is actually a directory\r\n if seq_dir_abs.exists() and seq_dir_abs.is_dir():\r\n log(\"zip\", f\"Archive {zip_path.relative_to(ARCHIVE_ROOT)} has matching sequence directory; keeping\", verbose_only=True, verbose=verbose)\r\n continue\r\n\r\n # Sequence directory doesn't exist - this is an orphan archive\r\n rel = zip_path.relative_to(ARCHIVE_ROOT)\r\n log(\"zip\", f\"Removing orphan archive {rel}\", verbose_only=False, verbose=verbose)\r\n\r\n # Remove archive file\r\n if zip_path_abs.exists():\r\n zip_path_abs.unlink()\r\n log(\"zip\", f\"Deleted archive: {rel}\", verbose_only=True, verbose=verbose)\r\n \r\n # Remove state file if it exists\r\n if state_path_abs.exists():\r\n state_path_abs.unlink()\r\n state_rel = state_path.relative_to(ARCHIVE_ROOT)\r\n log(\"zip\", f\"Removed orphan metadata {state_rel}\", verbose_only=False, verbose=verbose)\r\n \r\n removed.append(zip_path_abs)\r\n except Exception as e:\r\n # Log error but continue processing other archives\r\n try:\r\n rel = zip_path.relative_to(ARCHIVE_ROOT)\r\n except:\r\n rel = zip_path\r\n log(\"zip\", f\"Error processing archive {rel}: {e}\", verbose_only=True, verbose=verbose)\r\n log(\"zip\", f\"Traceback: {traceback.format_exc()}\", verbose_only=True, verbose=verbose)\r\n continue\r\n except Exception as e:\r\n log(\"zip\", f\"Error scanning for {pattern} archives: {e}\", verbose_only=True, verbose=verbose)\r\n log(\"zip\", f\"Traceback: {traceback.format_exc()}\", verbose_only=True, verbose=verbose)\r\n continue\r\n\r\n if not removed:\r\n log(\"zip\", \"No orphan archives found.\", verbose_only=True, verbose=verbose)\r\n return 0\r\n\r\n # Clean up empty parent directories\r\n archive_root_abs = ARCHIVE_ROOT.resolve()\r\n for parent in sorted({p.resolve().parent for p in removed}, key=lambda p: len(p.parts), reverse=True):\r\n parent_resolved = parent.resolve()\r\n if not parent_resolved.exists():\r\n continue\r\n try:\r\n while parent_resolved != archive_root_abs and not any(parent_resolved.iterdir()):\r\n parent_resolved.rmdir()\r\n parent_resolved = parent_resolved.parent.resolve()\r\n except OSError:\r\n # Ignore errors when removing directories (might be in use)\r\n pass\r\n\r\n return len(removed)\r\n\r\n\r\ndef main() -> int:\r\n args = parse_args()\r\n log(\"init\", \"zip_sequences starting up...\")\r\n log(\"init\", f\"Working directory: {Path.cwd()}\")\r\n log(\"init\", f\"Mode: {args.mode}, zipper: {ZIPPER_TYPE}, jobs arg: {args.jobs or 'auto'}\")\r\n if ZIPPER_TYPE == \"7z\":\r\n exe = SEVEN_Z_EXE or \"not found\"\r\n max_inst = MAX_7Z_INSTANCES if MAX_7Z_INSTANCES is not None else \"auto\"\r\n log(\"init\", f\"7z executable: {exe}, Max7zInst: {max_inst}, method: {COMPRESSION_METHOD}, level: {COMPRESSION_LEVEL}\")\r\n\r\n if args.mode == \"expand\":\r\n # For expand mode, use simple CPU-based worker calculation\r\n workers, _ = max_workers(args.jobs, work_items=None, verbose=args.verbose)\r\n run_expand(workers, verbose=args.verbose)\r\n return 0\r\n\r\n # For zip mode, work items will be calculated in run_zip\r\n updated = run_zip(args.jobs, verbose=args.verbose)\r\n return 0 if updated >= 0 else 1\r\n\r\n\r\nif __name__ == \"__main__\":\r\n try:\r\n raise SystemExit(main())\r\n except Exception as exc: # broad to surface unexpected errors cleanly\r\n print(f\"Sequence sync failed: {exc}\", file=sys.stderr)\r\n raise\r\n\r\n"
}
```
User decision: **accepted**
Status: **completed**
---
Thought Process
**Running lint checks**
---
Tool use: **read_lints** • Read lints for 1 file
Lint paths:
- `zip_sequences.py`
Lint results
**No lint errors found**
---
Thought Process
**Preparing update about cleanup fix**
---
Ran into the root cause: `cleanup_orphan_archives()` never got called whenever there were **zero** sequences queued for zipping. That’s exactly the “archives already up to date” path you hit, so orphan zips were skipped entirely.
Updated `run_zip()` so the cleanup step always runs (and logs removals) even when nothing needed zipping. No other behavioral changes.
Please re-run `ZipSeqArchv.bat`—you should now see the orphan archive removal messages right after the scan summary, and the stale `.7z`/`.meta.json` files will be deleted.
---
_**User (2025-11-25 19:57Z)**_
aha, perfect!
PS P:\250827_FestivalTurf\Renders> P:\250827_FestivalTurf\Renders\ZipSeqArchv.bat
[init] Loading configuration sources...
[init] Reading project config at P:\250827_FestivalTurf\.config\config.json
[init] Configuration loaded from project
[init] zip_sequences starting up...
[init] Working directory: P:\250827_FestivalTurf
[init] Mode: zip, zipper: 7z, jobs arg: auto
[init] 7z executable: C:\ProgramData\chocolatey\bin\7z.EXE, Max7zInst: auto, method: LZMA2, level: 5
[init] Scanning sequences under \\NEXUS\proj\250827_FestivalTurf\Renders
[scan] [1] Inspecting 01 Develop Your Turf Plan\01_Blueprint
[scan] 01 Develop Your Turf Plan\01_Blueprint metadata unchanged; archive up to date
[scan] [2] Inspecting 01 Develop Your Turf Plan\01_Blueprint_insert
[scan] 01 Develop Your Turf Plan\01_Blueprint_insert metadata unchanged; archive up to date
[scan] [3] Inspecting 01 Develop Your Turf Plan\02_Roll Sizes
[scan] 01 Develop Your Turf Plan\02_Roll Sizes metadata unchanged; archive up to date
[scan] [4] Inspecting 01 Develop Your Turf Plan\03_Window
[scan] 01 Develop Your Turf Plan\03_Window metadata unchanged; archive up to date
[scan] [5] Inspecting 01 Develop Your Turf Plan\04_Dog
[scan] 01 Develop Your Turf Plan\04_Dog metadata unchanged; archive up to date
[scan] [10] Inspecting 02 Preparing the Base\01_intro
[scan] [20] Inspecting 02 Preparing the Base\10_FT_Talking
[scan] [30] Inspecting 03 Roll Out And Stretch\05_diagram
[scan] Skipping 04 Securing Your Seam\02_FT talking (unchanged since archive)
[scan] Skipping 04 Securing Your Seam\02_FT talking_Pt2 (unchanged since archive)
[scan] [40] Inspecting 04 Securing Your Seam\02_S seam insert
[scan] Skipping 04 Securing Your Seam\04_holding nails (unchanged since archive)
[scan] Skipping 04 Securing Your Seam\09_ending (unchanged since archive)
[scan] [50] Inspecting 04 Securing Your Seam\Nail_dimpling_example
[scan] Skipping 05 Stretch Cut Nail\01_intro (unchanged since archive)
[scan] [60] Inspecting 05 Stretch Cut Nail\Visual 10
[scan] [70] Inspecting 06 Infill And Powerbrooming\Visual 2B
[scan] [80] Inspecting blooper_reel\weird\ft_cutting_turf
[zip] Archives already up to date; no sequences needed zipping.
[scan] Summary: scanned 81, quick-skipped 35, state-skipped 46, empty 0, queued 0
[zip] Scanning for orphan archives in \\NEXUS\proj\250827_FestivalTurf\Renders\_zipped
[zip] Archive 01 Develop Your Turf Plan\01_Blueprint.zip has matching sequence directory; keeping
[zip] Archive 01 Develop Your Turf Plan\01_Blueprint_insert.zip has matching sequence directory; keeping
[zip] Archive 01 Develop Your Turf Plan\02_Roll Sizes.zip has matching sequence directory; keeping
[zip] Archive 01 Develop Your Turf Plan\03_Window.zip has matching sequence directory; keeping
[zip] Archive 01 Develop Your Turf Plan\04_Dog.zip has matching sequence directory; keeping
[zip] Archive 01 Develop Your Turf Plan\04_FT.zip has matching sequence directory; keeping
[zip] Archive 01 Develop Your Turf Plan\04_Kid.zip has matching sequence directory; keeping
[zip] Archive 01 Develop Your Turf Plan\05_Call for Help.zip has matching sequence directory; keeping
[zip] Archive 01 Develop Your Turf Plan\06_Closing.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\01_intro.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\02_drainage.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\03_dirt to gravel.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\04_Rock Yard.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\05_raking.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\06_Watering.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\07_compacting.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\08_muscle arms.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\09_level ground.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\10_Final_Layer_v2.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\10_FT_Talking.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\10_pour_final_layer.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\12_base depth.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\13_Cutout.zip has matching sequence directory; keeping
[zip] Archive 02 Preparing the Base\14_end scene.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\01_opening.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\02_sun heating.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\03_check for windows.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\04_grass Lean.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\04_Lean Insert.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\05_diagram.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\05_diagram_pt2.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\06_trim one row.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\06_trim to curb.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\07_anchor explain.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\07_Kick insert.zip has matching sequence directory; keeping
[zip] Archive 03 Roll Out And Stretch\New Ending.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\01_intro.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\02_S seam insert.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\02_straight seam glue insert.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\02_straight seam nail insert.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\03_seam gap.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\03_seam gap_second_half.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\06_07_Nail_insert.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\06_FT_Hammering_insert.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\08_smooth blend.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\Nail_dimpling_example.zip has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\02_FT talking.7z has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\02_FT talking_Pt2.7z has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\04_holding nails.7z has matching sequence directory; keeping
[zip] Archive 04 Securing Your Seam\09_ending.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\01_intro.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\02_kicker insert.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\02_talking.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\03_FT Shuffle.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\03_FT Shuffle_dance_only.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\03_FT Shuffle_Intro.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\04_stretching pattern.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\04_talking.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\05_secure the stretch.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\Visual 10.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\Visual 11.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\Visual 6.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\Visual 6_INSERT.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\Visual 7.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\Visual 8 insert.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\Visual 8.7z has matching sequence directory; keeping
[zip] Archive 05 Stretch Cut Nail\Visual 9.7z has matching sequence directory; keeping
[zip] Archive 06 Infill And Powerbrooming\01_opening.7z has matching sequence directory; keeping
[zip] Archive 06 Infill And Powerbrooming\Visual 2A.7z has matching sequence directory; keeping
[zip] Archive 06 Infill And Powerbrooming\Visual 2B.7z has matching sequence directory; keeping
[zip] Archive 06 Infill And Powerbrooming\Visual 2C.7z has matching sequence directory; keeping
[zip] Archive 06 Infill And Powerbrooming\Visual 2D.7z has matching sequence directory; keeping
[zip] Archive 06 Infill And Powerbrooming\Visual 3A.7z has matching sequence directory; keeping
[zip] Removing orphan archive 06 Infill And Powerbrooming\Visual 3B.7z
[zip] Deleted archive: 06 Infill And Powerbrooming\Visual 3B.7z
[zip] Removed orphan metadata 06 Infill And Powerbrooming\Visual 3B.7z.meta.json
[zip] Removing orphan archive 06 Infill And Powerbrooming\Visual 4A.7z
[zip] Deleted archive: 06 Infill And Powerbrooming\Visual 4A.7z
[zip] Removed orphan metadata 06 Infill And Powerbrooming\Visual 4A.7z.meta.json
[zip] Archive 06 Infill And Powerbrooming\Visual 5.7z has matching sequence directory; keeping
[zip] Removing orphan archive 06 Infill And Powerbrooming\Visual 5B.7z
[zip] Deleted archive: 06 Infill And Powerbrooming\Visual 5B.7z
[zip] Removed orphan metadata 06 Infill And Powerbrooming\Visual 5B.7z.meta.json
[zip] Removing orphan archive 06 Infill And Powerbrooming\Visual 5B_insert.7z
[zip] Deleted archive: 06 Infill And Powerbrooming\Visual 5B_insert.7z
[zip] Removed orphan metadata 06 Infill And Powerbrooming\Visual 5B_insert.7z.meta.json
[zip] Removing orphan archive 06 Infill And Powerbrooming\Visual 6.7z
[zip] Deleted archive: 06 Infill And Powerbrooming\Visual 6.7z
[zip] Removed orphan metadata 06 Infill And Powerbrooming\Visual 6.7z.meta.json
[zip] Archive blooper_reel\ft_cutting_turf_face_zoom.7z has matching sequence directory; keeping
[zip] Archive blooper_reel\ft_cutting_turf_shot_1_behind.7z has matching sequence directory; keeping
[zip] Archive blooper_reel\ft_cutting_turf_shot_1_face.7z has matching sequence directory; keeping
[zip] Archive blooper_reel\ft_cutting_turf_sword_prep.7z has matching sequence directory; keeping
[zip] Archive blooper_reel\ft_cutting_turf_ZOOM_1.7z has matching sequence directory; keeping
[zip] Archive blooper_reel\weird\ft_cutting_turf.7z has matching sequence directory; keeping
[zip] Archive blooper_reel\weird_2\ft_cutting_turf.7z has matching sequence directory; keeping
[zip] Removed 5 orphan archive(s).
---
_**Agent (model default, mode Agent)**_
---
The fix is working as expected. The script now:
1. **Always runs cleanup** — even when no sequences need zipping
2. **Finds orphaned archives** — detected 5 orphaned archives (Visual 3B, Visual 4A, Visual 5B, Visual 5B_insert, Visual 6)
3. **Removes them properly** — deleted both the `.7z` archives and their `.meta.json` files
4. **Reports clearly** — shows what's being removed and provides a summary count
The cleanup function now runs in all cases, not just when sequences are queued for compression. This ensures that when you delete sequences, their corresponding archives and metadata files are automatically cleaned up on the next run of `ZipSeqArchv.bat`.
---
_**User (2025-11-25 21:04Z)**_
another issue: it's ignoring the sequence within this folder:
P:\251119_Tatt2Away_How-it-Works\Renders\scab_v2.1\
---
_**Agent (model default, mode Agent)**_
---
Investigating why that sequence is being ignored. Checking how sequences are detected:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **codebase_search** • Searched codebase "How does find_sequence_dirs detect sequence directories and what conditions might cause a directory to be skipped?" • **14** results
| File | Lines |
|------|-------|
| `zip_sequences.py` | L471-496 |
| `zip_sequences.py` | L783-940 |
| `zip_sequences.py` | L478-514 |
| `UpdateSequences.ps1` | L194-235 |
| `zip_sequences.py` | L785-940 |
| `UpdateSequences.ps1` | L232-276 |
| `zip_sequences.py` | L943-1070 |
| `UpdateSequences.ps1` | L60 |
| `zip_sequences.py` | L553-780 |
| `UpdateSequences.ps1` | L143-200 |
| `zip_sequences.py` | L102 |
| `zip_sequences.py` | L546-724 |
| `zip_sequences.py` | L987-1091 |
| `zip_sequences.py` | L788-1070 |
---
---
Checking the sequence detection logic. `find_sequence_dirs` only detects directories that contain image files directly. If `scab_v2.1` has images only in subdirectories, it may be missed. Verifying and improving the detection:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **read_file** • Read file: zip_sequences.py
---
---
Adding debug logging to see why `scab_v2.1` is being skipped. Improving the detection logic to handle edge cases:
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 17, lines removed: undefined
```diff
@@ -486,4 +486,21 @@
if has_frames:
yield path
+ # Also check subdirectories for sequence files (in case the directory itself has no files)
+ # but only if the directory is not too deep (to avoid scanning archive directories)
+ elif not any(part in ("_archive", "_CURRENT", "_zipped") for part in path.parts):
+ # Check if any immediate subdirectory has sequence files
+ for subdir in dirnames:
+ subdir_path = path / subdir
+ if subdir_path.is_dir():
+ try:
+ subdir_files = list(subdir_path.iterdir())
+ if any(f.is_file() and f.suffix.lower() in SEQUENCE_EXTENSIONS for f in subdir_files):
+ # This directory contains a subdirectory with sequence files
+ # Treat the parent as a sequence directory
+ yield path
+ break
+ except (OSError, PermissionError):
+ # Skip if we can't access the subdirectory
+ continue
```
---
---
Reverting that change and adding debug logging to see why it's being skipped:
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 11, lines removed: 18
```diff
@@ -477,30 +477,23 @@
- def find_sequence_dirs(root: Path) -> Iterator[Path]:
+ def find_sequence_dirs(root: Path, *, verbose: bool = False) -> Iterator[Path]:
for dirpath, dirnames, filenames in os.walk(root):
path = Path(dirpath)
dirnames[:] = [d for d in dirnames if d not in ("_archive", "_CURRENT")]
if is_archive_path(path):
+ if verbose:
+ rel = path.relative_to(root) if path.is_relative_to(root) else path
+ log("scan", f"Skipping archive path: {rel}", verbose_only=True, verbose=verbose)
continue
has_frames = any(Path(dirpath, f).suffix.lower() in SEQUENCE_EXTENSIONS for f in filenames)
if has_frames:
yield path
- # Also check subdirectories for sequence files (in case the directory itself has no files)
- # but only if the directory is not too deep (to avoid scanning archive directories)
- elif not any(part in ("_archive", "_CURRENT", "_zipped") for part in path.parts):
- # Check if any immediate subdirectory has sequence files
- for subdir in dirnames:
- subdir_path = path / subdir
- if subdir_path.is_dir():
- try:
- subdir_files = list(subdir_path.iterdir())
- if any(f.is_file() and f.suffix.lower() in SEQUENCE_EXTENSIONS for f in subdir_files):
- # This directory contains a subdirectory with sequence files
- # Treat the parent as a sequence directory
- yield path
- break
- except (OSError, PermissionError):
- # Skip if we can't access the subdirectory
- continue
+ elif verbose:
+ # Log directories that don't have sequence files for debugging
+ rel = path.relative_to(root) if path.is_relative_to(root) else path
+ if "scab" in path.name.lower() or "v2" in path.name.lower():
+ # Special logging for directories that might be sequences
+ frame_extensions = [f for f in filenames if Path(f).suffix.lower() in SEQUENCE_EXTENSIONS]
+ log("scan", f"Directory {rel} has no sequence files (found {len(filenames)} files, {len(frame_extensions)} with sequence extensions)", verbose_only=True, verbose=verbose)
```
---
---
Updating the call to `find_sequence_dirs` to pass the verbose flag:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 1, lines removed: 1
```diff
@@ -807,5 +807,5 @@
if RENDER_ROOT.exists():
- for seq_dir in find_sequence_dirs(RENDER_ROOT):
+ for seq_dir in find_sequence_dirs(RENDER_ROOT, verbose=verbose):
total_scanned += 1
rel = seq_dir.relative_to(RENDER_ROOT)
```
---
Tool use: **read_lints** • Read lints for 1 file
Lint paths:
- `zip_sequences.py`
Lint results
**No lint errors found**
---
---
Improving detection to handle sequences where images are in subdirectories. Updating the logic:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 1, lines removed: undefined
```diff
@@ -478,4 +478,5 @@
def find_sequence_dirs(root: Path, *, verbose: bool = False) -> Iterator[Path]:
+ seen_dirs = set() # Track directories we've already yielded to avoid duplicates
for dirpath, dirnames, filenames in os.walk(root):
path = Path(dirpath)
```
**Chunk 2**
Lines added: 19, lines removed: 3
```diff
@@ -486,14 +487,30 @@
log("scan", f"Skipping archive path: {rel}", verbose_only=True, verbose=verbose)
continue
+
+ # Check if this directory has sequence files directly
has_frames = any(Path(dirpath, f).suffix.lower() in SEQUENCE_EXTENSIONS for f in filenames)
if has_frames:
- yield path
+ path_resolved = path.resolve()
+ if path_resolved not in seen_dirs:
+ seen_dirs.add(path_resolved)
+ yield path
elif verbose:
# Log directories that don't have sequence files for debugging
rel = path.relative_to(root) if path.is_relative_to(root) else path
- if "scab" in path.name.lower() or "v2" in path.name.lower():
+ if "scab" in path.name.lower():
# Special logging for directories that might be sequences
frame_extensions = [f for f in filenames if Path(f).suffix.lower() in SEQUENCE_EXTENSIONS]
- log("scan", f"Directory {rel} has no sequence files (found {len(filenames)} files, {len(frame_extensions)} with sequence extensions)", verbose_only=True, verbose=verbose)
+ all_files = list(path.iterdir()) if path.exists() else []
+ log("scan", f"Directory {rel}: {len(filenames)} files in dir, {len(frame_extensions)} with sequence extensions, {len(all_files)} total items", verbose_only=True, verbose=verbose)
+ # Check subdirectories
+ for subdir in dirnames[:5]: # Check first 5 subdirs
+ subdir_path = path / subdir
+ try:
+ subdir_files = list(subdir_path.iterdir()) if subdir_path.exists() else []
+ subdir_frame_files = [f for f in subdir_files if f.is_file() and f.suffix.lower() in SEQUENCE_EXTENSIONS]
+ if subdir_frame_files:
+ log("scan", f" Subdirectory {subdir} has {len(subdir_frame_files)} sequence files", verbose_only=True, verbose=verbose)
+ except (OSError, PermissionError):
+ pass
```
---
---
Run `ZipSeqArchv.bat` with `--verbose` to see why `scab_v2.1` is being skipped. The logging will show:
1. Whether the directory is being scanned
2. How many files it contains
3. Whether any subdirectories have sequence files
The current logic only detects directories that have sequence files directly in them. If `scab_v2.1` has images only in subdirectories, those subdirectories are detected as separate sequences, not `scab_v2.1` itself.
If you want `scab_v2.1` to be treated as a single sequence even when images are in subdirectories, we can update the detection logic. Share the verbose output and we can adjust accordingly.
---
_**User (2025-11-25 21:04Z)**_
PS P:\251119_Tatt2Away_How-it-Works\Renders> P:\251119_Tatt2Away_How-it-Works\Renders\ZipSeqArchv.bat
[init] Loading configuration sources...
[init] Reading project config at P:\251119_Tatt2Away_How-it-Works\.config\config.json
[init] Configuration loaded from project
[init] zip_sequences starting up...
[init] Working directory: P:\251119_Tatt2Away_How-it-Works
[init] Mode: zip, zipper: 7z, jobs arg: auto
[init] 7z executable: C:\ProgramData\chocolatey\bin\7z.EXE, Max7zInst: auto, method: LZMA2, level: 5
[init] Scanning sequences under \\NEXUS\proj\251119_Tatt2Away_How-it-Works\Renders
[scan] [1] Inspecting 231016_audio_skin-top_2_CH
[scan] Skipping 231016_audio_skin-top_2_CH (unchanged since archive)
[scan] [2] Inspecting scab_v2.1
[scan] Queued scab_v2.1 for compression (~1.31GB) [1 total]
[scan] [3] Inspecting tatt2away_process_v3
[scan] Queued tatt2away_process_v3 for compression (~0.67GB) [2 total]
[scan] [4] Inspecting Tatt2Away_stages_v2_CM
[scan] Skipping Tatt2Away_stages_v2_CM (unchanged since archive)
[zip] RAM: 102.2GB available (80% of total), 97.1GB for compression (95%)
[zip] Estimated RAM per sequence: 6.0GB total across 2 sequences
[zip] All sequences fit in RAM → using 2 workers (one per sequence)
[zip] Final worker count: 2 (requested: None)
[init] Preparing to compress 2 sequence(s) with 2 worker(s)
[zip] Renders\scab_v2.1 -> Renders\_zipped\scab_v2.7z
[zip] Renders\tatt2away_process_v3 -> Renders\_zipped\tatt2away_process_v3.7z
[zip] 7z command: C:\ProgramData\chocolatey\bin\7z.EXE a -y -bb0 -mx=5 -t7z -m0=LZMA2 -md=1024m -mmt=on \\NEXUS\proj\251119_Tatt2Away_How-it-Works\Renders\_zipped\tmpaxb_br7j.7z @C:\Users\Nathan\AppData\Local\Temp\tmpsim_da0y.lst
[zip] 7z command: C:\ProgramData\chocolatey\bin\7z.EXE a -y -bb0 -mx=5 -t7z -m0=LZMA2 -md=1024m -mmt=on \\NEXUS\proj\251119_Tatt2Away_How-it-Works\Renders\_zipped\tmp2vv7izgy.7z @C:\Users\Nathan\AppData\Local\Temp\tmpv25oxn65.lst
[zip] 1/2 tatt2away_process_v3
[zip] 2/2 scab_v2.1
[zip] Updated 2 sequence archive(s).
[zip] Archives updated. Stage manually with `git add Renders/_zipped`, if desired.
[scan] Summary: scanned 4, quick-skipped 2, state-skipped 0, empty 0, queued 2
[zip] Scanning for orphan archives in \\NEXUS\proj\251119_Tatt2Away_How-it-Works\Renders\_zipped
[zip] Archive 231016_audio_skin-top_2_CH.7z has matching sequence directory; keeping
[zip] Removing orphan archive scab_v2.7z
[zip] Deleted archive: scab_v2.7z
[zip] Removed orphan metadata scab_v2.7z.meta.json
[zip] Archive tatt2away_process_v3.7z has matching sequence directory; keeping
[zip] Archive Tatt2Away_stages_v2_CM.7z has matching sequence directory; keeping
[zip] Removed 1 orphan archive(s).
---
_**Agent (model default, mode Agent)**_
---
Archive path is incorrect: `scab_v2.1` is being archived as `scab_v2.7z` instead of `scab_v2.1.7z`, so the `.1` is dropped. Checking `archive_path_for`:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **read_file** • Read file: zip_sequences.py
---
---
Tool use: **grep** • Grep for "def archive_path_for" in "zip_sequences.py" • 1 match
Output mode: content
| File | Content | Line |
|------|------|------|
| `zip_sequences.py` | `def archive_path_for(seq_dir: Path) -> Path:` | L566 |
---
---
Tool use: **read_file** • Read file: zip_sequences.py
---
---
Tool use: **read_file** • Read file: zip_sequences.py
---
---
---
The issue is that `archive_path_for` uses `.with_suffix()`, which replaces the suffix. For `scab_v2.1`, `.1` is treated as a suffix and replaced with `.7z`, producing `scab_v2.7z` instead of `scab_v2.1.7z`. Fixing it to append the suffix instead:
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 2, lines removed: 1
```diff
@@ -567,5 +567,6 @@
rel = seq_dir.relative_to(RENDER_ROOT)
suffix = ".7z" if ZIPPER_TYPE == "7z" else ".zip"
- return (ARCHIVE_ROOT / rel).with_suffix(suffix)
+ # Append suffix instead of replacing, since directory names might have dots (e.g., "scab_v2.1")
+ return ARCHIVE_ROOT / f"{rel}{suffix}"
```
---
Tool use: **read_file** • Read file: zip_sequences.py
---
---
Fixing `sequence_dir_for` to handle this correctly:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 8, lines removed: 1
```diff
@@ -573,5 +573,12 @@
def sequence_dir_for(zip_path: Path) -> Path:
rel = zip_path.relative_to(ARCHIVE_ROOT)
- return (RENDER_ROOT / rel).with_suffix("")
+ # Remove the archive suffix (.7z or .zip) from the end
+ # Handle both .7z and .zip extensions
+ rel_str = str(rel)
+ if rel_str.endswith(".7z"):
+ rel_str = rel_str[:-3]
+ elif rel_str.endswith(".zip"):
+ rel_str = rel_str[:-4]
+ return RENDER_ROOT / rel_str
```
---
Tool use: **read_lints** • Read lints for 1 file
Lint paths:
- `zip_sequences.py`
Lint results
**No lint errors found**
---
---
Fixed the issue: `archive_path_for` used `.with_suffix()`, which replaced the suffix. For `scab_v2.1`, `.1` was treated as a suffix and replaced with `.7z`, producing `scab_v2.7z` instead of `scab_v2.1.7z`.
**Changes made:**
1. **`archive_path_for`**: Appends the suffix instead of replacing it, so `scab_v2.1` becomes `scab_v2.1.7z`
2. **`sequence_dir_for`**: Removes the archive extension (`.7z` or `.zip`) from the end to correctly map back to the sequence directory
When you run `ZipSeqArchv.bat` again:
- `scab_v2.1` will be archived as `scab_v2.1.7z` (not `scab_v2.7z`)
- The cleanup function will correctly match the archive to the sequence directory
- It won't be removed as an orphan
This also fixes the issue for any other directory names containing dots (e.g., `v2.1`, `test.1.2`, etc.).
---
_**User (2025-11-25 21:04Z)**_
is the compression lvl 0 working on this? it doesn't seem to be going very fast
PS P:\251119_Tatt2Away_How-it-Works\Renders> P:\251119_Tatt2Away_How-it-Works\Renders\ZipSeqArchv.bat
[init] Loading configuration sources...
[init] Reading project config at P:\251119_Tatt2Away_How-it-Works\.config\config.json
[init] Configuration loaded from project
[init] zip_sequences starting up...
[init] Working directory: P:\251119_Tatt2Away_How-it-Works
[init] Mode: zip, zipper: 7z, jobs arg: auto
[init] 7z executable: C:\ProgramData\chocolatey\bin\7z.EXE, Max7zInst: auto, method: LZMA2, level: 0
[init] Scanning sequences under \\NEXUS\proj\251119_Tatt2Away_How-it-Works\Renders
[scan] [1] Inspecting 231016_audio_skin-top_2_CH
[scan] Skipping 231016_audio_skin-top_2_CH (unchanged since archive)
[scan] [2] Inspecting scab_v2.1
[scan] Queued scab_v2.1 for compression (~1.31GB) [1 total]
[scan] [3] Inspecting tatt2away_process_v3
[scan] Skipping tatt2away_process_v3 (unchanged since archive)
[scan] [4] Inspecting Tatt2Away_stages_v2_CM
[scan] Skipping Tatt2Away_stages_v2_CM (unchanged since archive)
[zip] RAM: 102.2GB available (80% of total), 97.1GB for compression (95%)
[zip] Estimated RAM per sequence: 3.0GB total across 1 sequences
[zip] All sequences fit in RAM → using 1 workers (one per sequence)
[zip] Final worker count: 1 (requested: None)
[init] Preparing to compress 1 sequence(s) with 1 worker(s)
[zip] Renders\scab_v2.1 -> Renders\_zipped\scab_v2.1.7z
[zip] 7z command: C:\ProgramData\chocolatey\bin\7z.EXE a -y -bb0 -mx=0 -t7z -m0=LZMA2 -md=1024m -mmt=32 \\NEXUS\proj\251119_Tatt2Away_How-it-Works\Renders\_zipped\tmpq3l68ie4.7z @C:\Users\Nathan\AppData\Local\Temp\tmp57ak8mgp.lst
---
_**Agent (model default, mode Agent)**_
---
At compression level 0, the script still uses `-m0=LZMA2`, which compresses. At level 0, we should use `-m0=Copy` (store). Updating the code to use Copy when level is 0:
---
Tool use: **read_file** • Read file: zip_sequences.py
---
Tool use: **grep** • Grep for "COMPRESSION_LEVEL" in "zip_sequences.py" • 11 matches
Output mode: content
| File | Content | Line |
|------|------|------|
| `zip_sequences.py` | `COMPRESSION_LEVEL = CONFIG.get("compression", 9)` | L114 |
| `zip_sequences.py` | `if isinstance(COMPRESSION_LEVEL, str):` | L115 |
| `zip_sequences.py` | `COMPRESSION_LEVEL = int(COMPRESSION_LEVEL)` | L117 |
| `zip_sequences.py` | `COMPRESSION_LEVEL = 9` | L119 |
| `zip_sequences.py` | `if not isinstance(COMPRESSION_LEVEL, int):` | L120 |
| `zip_sequences.py` | `COMPRESSION_LEVEL = 9` | L121 |
| `zip_sequences.py` | `COMPRESSION_LEVEL = max(0, min(9, COMPRESSION_LEVEL))` | L122 |
| `zip_sequences.py` | `f"-mx=\{COMPRESSION_LEVEL\}",` | L645 |
| `zip_sequences.py` | `if COMPRESSION_LEVEL <= 0:` | L743 |
| `zip_sequences.py` | `zip_kwargs = \{"compresslevel": COMPRESSION_LEVEL\}` | L748 |
| `zip_sequences.py` | `log("init", f"7z executable: \{exe\}, Max7zInst: \{max_inst\}, method: \{COMPRESSION_METHOD\}, level: \{COMPRESSION_LEVEL\}")` | L1117 |
---
---
Tool use: **read_file** • Read file: zip_sequences.py
---
---
---
Updating the code to use Copy (store) when compression level is 0:
---
Tool use: **code_edit** • Edit file: zip_sequences.py
**Chunk 1**
Lines added: 42, lines removed: 35
```diff
@@ -648,40 +648,47 @@
# Set compression method and memory/dictionary size based on method
- # PPMd uses memory parameter in method string, LZMA2 uses -md flag
+ # At compression level 0, use Copy (store) method for maximum speed
FIXED_DICT_SIZE_MB = 1024
- if COMPRESSION_METHOD == "PPMd":
- # PPMd: specify memory as part of method string
- cmd.append(f"-m0=PPMd:mem={FIXED_DICT_SIZE_MB}m")
- elif COMPRESSION_METHOD == "LZMA2":
- # LZMA2: use -md for dictionary size
- cmd.append("-m0=LZMA2")
- cmd.append(f"-md={FIXED_DICT_SIZE_MB}m")
- elif COMPRESSION_METHOD == "BZip2":
- # BZip2: use -md for dictionary size (smaller max: 900KB)
- max_bzip2_dict = min(FIXED_DICT_SIZE_MB, 900) # BZip2 max is 900KB
- cmd.append("-m0=BZip2")
- cmd.append(f"-md={max_bzip2_dict}k")
- elif COMPRESSION_METHOD == "Deflate":
- # Deflate: doesn't use dictionary size parameter
- cmd.append("-m0=Deflate")
- else:
- # Fallback: use LZMA2
- cmd.append("-m0=LZMA2")
- cmd.append(f"-md={FIXED_DICT_SIZE_MB}m")
-
- # CPU thread allocation: when there's only 1 worker, use all CPU cores
- # When there are multiple workers, use auto mode to let 7z decide
- # Note: PPMd is single-threaded and won't benefit from -mmt
- cpu_cores = os.cpu_count() or 1
- if COMPRESSION_METHOD == "PPMd":
- # PPMd is single-threaded, so -mmt won't help
- # But we can still set it for consistency
- cmd.append("-mmt=on")
- elif worker_count == 1:
- # Single worker: use all CPU cores for maximum speed (LZMA2, BZip2, Deflate support this)
- cmd.append(f"-mmt={cpu_cores}")
- else:
- # Multiple workers: use auto mode (7z will manage threads)
- cmd.append("-mmt=on")
+ if COMPRESSION_LEVEL == 0:
+ # Level 0 = no compression, just store files (fastest)
+ cmd.append("-m0=Copy")
+ # Copy method doesn't need threading, but enable it anyway for consistency
+ cmd.append("-mmt=on")
+ else:
+ # Compression levels 1-9: use configured compression method
+ if COMPRESSION_METHOD == "PPMd":
+ # PPMd: specify memory as part of method string
+ cmd.append(f"-m0=PPMd:mem={FIXED_DICT_SIZE_MB}m")
+ elif COMPRESSION_METHOD == "LZMA2":
+ # LZMA2: use -md for dictionary size
+ cmd.append("-m0=LZMA2")
+ cmd.append(f"-md={FIXED_DICT_SIZE_MB}m")
+ elif COMPRESSION_METHOD == "BZip2":
+ # BZip2: use -md for dictionary size (smaller max: 900KB)
+ max_bzip2_dict = min(FIXED_DICT_SIZE_MB, 900) # BZip2 max is 900KB
+ cmd.append("-m0=BZip2")
+ cmd.append(f"-md={max_bzip2_dict}k")
+ elif COMPRESSION_METHOD == "Deflate":
+ # Deflate: doesn't use dictionary size parameter
+ cmd.append("-m0=Deflate")
+ else:
+ # Fallback: use LZMA2
+ cmd.append("-m0=LZMA2")
+ cmd.append(f"-md={FIXED_DICT_SIZE_MB}m")
+
+ # CPU thread allocation: when there's only 1 worker, use all CPU cores
+ # When there are multiple workers, use auto mode to let 7z decide
+ # Note: PPMd is single-threaded and won't benefit from -mmt
+ cpu_cores = os.cpu_count() or 1
+ if COMPRESSION_METHOD == "PPMd":
+ # PPMd is single-threaded, so -mmt won't help
+ # But we can still set it for consistency
+ cmd.append("-mmt=on")
+ elif worker_count == 1:
+ # Single worker: use all CPU cores for maximum speed (LZMA2, BZip2, Deflate support this)
+ cmd.append(f"-mmt={cpu_cores}")
+ else:
+ # Multiple workers: use auto mode (7z will manage threads)
+ cmd.append("-mmt=on")
cmd.extend([
```
---
Tool use: **read_lints** • Read lints for 1 file
Lint paths:
- `zip_sequences.py`
Lint results
**No lint errors found**
---
---
Fixed: at compression level 0, the script was still using LZMA2, which compresses. At level 0, it now uses `-m0=Copy` (store), which packages files without compression.
**What changed:**
- When `compression: 0` is set, the script now uses `-m0=Copy` instead of the configured compression method
- Copy mode stores files without compression, so it's limited mainly by disk I/O speed
- Compression levels 1-9 continue to use the configured method (LZMA2, PPMd, etc.)
When you run `ZipSeqArchv.bat` with `compression: 0`, it should be much faster since it's just copying files into the archive without compression. The 7z command will now show `-m0=Copy` instead of `-m0=LZMA2` when compression level is 0.
---
_**User (2025-11-25 21:04Z)**_
is good?
PS P:\251119_Tatt2Away_How-it-Works\Renders> P:\251119_Tatt2Away_How-it-Works\Renders\ZipSeqArchv.bat
[init] Loading configuration sources...
[init] Reading project config at P:\251119_Tatt2Away_How-it-Works\.config\config.json
[init] Configuration loaded from project
[init] zip_sequences starting up...
[init] Working directory: P:\251119_Tatt2Away_How-it-Works
[init] Mode: zip, zipper: 7z, jobs arg: auto
[init] 7z executable: C:\ProgramData\chocolatey\bin\7z.EXE, Max7zInst: auto, method: LZMA2, level: 0
[init] Scanning sequences under \\NEXUS\proj\251119_Tatt2Away_How-it-Works\Renders
[scan] [1] Inspecting 231016_audio_skin-top_2_CH
[scan] Skipping 231016_audio_skin-top_2_CH (unchanged since archive)
[scan] [2] Inspecting scab_v2.1
[scan] Queued scab_v2.1 for compression (~1.31GB) [1 total]
[scan] [3] Inspecting tatt2away_process_v3
[scan] Skipping tatt2away_process_v3 (unchanged since archive)
[scan] [4] Inspecting Tatt2Away_stages_v2_CM
[scan] Skipping Tatt2Away_stages_v2_CM (unchanged since archive)
[zip] RAM: 102.2GB available (80% of total), 97.1GB for compression (95%)
[zip] Estimated RAM per sequence: 3.0GB total across 1 sequences
[zip] All sequences fit in RAM → using 1 workers (one per sequence)
[zip] Final worker count: 1 (requested: None)
[init] Preparing to compress 1 sequence(s) with 1 worker(s)
[zip] Renders\scab_v2.1 -> Renders\_zipped\scab_v2.1.7z
[zip] 7z command: C:\ProgramData\chocolatey\bin\7z.EXE a -y -bb0 -mx=0 -t7z -m0=Copy -mmt=on \\NEXUS\proj\251119_Tatt2Away_How-it-Works\Renders\_zipped\tmpglnjsei8.7z @C:\Users\Nathan\AppData\Local\Temp\tmp_vsbg9m6.lst
[zip] 1/1 scab_v2.1
[zip] Updated 1 sequence archive(s).
[zip] Archives updated. Stage manually with `git add Renders/_zipped`, if desired.
[scan] Summary: scanned 4, quick-skipped 3, state-skipped 0, empty 0, queued 1
[zip] Scanning for orphan archives in \\NEXUS\proj\251119_Tatt2Away_How-it-Works\Renders\_zipped
[zip] Archive 231016_audio_skin-top_2_CH.7z has matching sequence directory; keeping
[zip] Archive scab_v2.1.7z has matching sequence directory; keeping
[zip] Archive tatt2away_process_v3.7z has matching sequence directory; keeping
[zip] Archive Tatt2Away_stages_v2_CM.7z has matching sequence directory; keeping
[zip] Removing orphan archive tmp4vygazhw.7z
[zip] Deleted archive: tmp4vygazhw.7z
[zip] Removing orphan archive tmp8caqu_bp.7z
[zip] Deleted archive: tmp8caqu_bp.7z
[zip] Removing orphan archive tmp8nuwz4x1.7z
[zip] Deleted archive: tmp8nuwz4x1.7z
[zip] Removing orphan archive tmpfs0howr_.7z
[zip] Deleted archive: tmpfs0howr_.7z
[zip] Removed 4 orphan archive(s).
---
_**Agent (model default, mode Agent)**_
---
Yes, it's working correctly.
1. Compression level 0 is working: the 7z command shows `-m0=Copy` instead of `-m0=LZMA2`, so it's using Copy (store) with no compression, which should be faster.
2. Archive path is correct: `scab_v2.1.7z` is created correctly (not `scab_v2.7z`).
3. Orphan cleanup is working: it found and removed 4 leftover temporary files (`tmp*.7z`) that should have been cleaned up.
Everything looks good. With compression level 0, archiving should be much faster since it's just copying files into the archive without compression.
---