Compare commits

...

8 Commits

| Author | SHA1 | Message | Date |
|--------|------------|-------------------------------------------------------|----------------------------|
|        | 95a19dceb3 | npz to lfs                                            | 2025-11-27 13:01:31 -07:00 |
|        | e8f66cd06f | add unitypackage                                      | 2025-11-26 21:45:57 -07:00 |
|        | c5a2463b91 | specst                                                | 2025-11-26 18:43:40 -07:00 |
| Nathan | 1c86fa0c68 | attr for flip fluids                                  | 2025-11-26 13:38:50 -07:00 |
| Nathan | b67c2e7779 | deploy zip&unzipseqs                                  | 2025-11-26 13:36:00 -07:00 |
| Nathan | e1086312dd | adjust gitattributes and ignore for physics and bakes | 2025-11-26 13:00:35 -07:00 |
| Nathan | ba46dce3a3 | Merge remote-tracking branch 'HOME/main'              | 2025-11-26 10:00:38 -07:00 |
| Nathan | c101fbb381 | fix zipseq store not storing                          | 2025-11-25 14:22:44 -07:00 |
8 changed files with 1333 additions and 55 deletions

View File

```diff
@@ -1,4 +1,4 @@
-<!-- Generated by SpecStory, Markdown v2.0.0 -->
+<!-- Generated by SpecStory, Markdown v2.1.0 -->
 <!-- cursor Session 1aae6d70-0daa-49a8-b8f1-7e9d36a76a3b (2025-11-11 01:50Z) -->
@@ -988,7 +988,7 @@ Reviewing the current code structure, then implementing the plan step by step.
 ---
-<tool-use data-tool-type="generic" data-tool-name="todo_write">
+<tool-use data-tool-type="task" data-tool-name="todo_write">
 <details><summary>Tool use: **todo_write** • Todo List</summary>
 - [-] Add get_available_ram() function with psutil support and Windows fallback
@@ -1207,7 +1207,7 @@ Lines added: 38, lines removed: undefined
 ---
-<tool-use data-tool-type="generic" data-tool-name="todo_write">
+<tool-use data-tool-type="task" data-tool-name="todo_write">
 <details><summary>Tool use: **todo_write** • Todo List</summary>
 - [x] Add get_available_ram() function with psutil support and Windows fallback
@@ -1411,7 +1411,7 @@ Lines added: 73, lines removed: 4
 ---
-<tool-use data-tool-type="generic" data-tool-name="todo_write">
+<tool-use data-tool-type="task" data-tool-name="todo_write">
 <details><summary>Tool use: **todo_write** • Todo List</summary>
 - [x] Add get_available_ram() function with psutil support and Windows fallback
@@ -1472,7 +1472,7 @@ Lines added: 3, lines removed: undefined
 ---
-<tool-use data-tool-type="generic" data-tool-name="todo_write">
+<tool-use data-tool-type="task" data-tool-name="todo_write">
 <details><summary>Tool use: **todo_write** • Todo List</summary>
 - [x] Add get_available_ram() function with psutil support and Windows fallback
@@ -1528,7 +1528,7 @@ Lines added: 4, lines removed: 2
 ---
-<tool-use data-tool-type="generic" data-tool-name="todo_write">
+<tool-use data-tool-type="task" data-tool-name="todo_write">
 <details><summary>Tool use: **todo_write** • Todo List</summary>
 - [x] Add get_available_ram() function with psutil support and Windows fallback
```

View File

@@ -0,0 +1,197 @@
<!-- Generated by SpecStory, Markdown v2.1.0 -->
<!-- cursor Session 92e6d22f-d2e8-4786-8d47-b22f9893e990 (2025-11-26 20:08Z) -->
# Configure ConfigLoader for zipseq and unzipseq (2025-11-26 20:08Z)
_**User (2025-11-26 19:57Z)**_
@ConfigLoader.ps1 ConfigLoader should push zipseq and unzipseq to \Renders.
---
_**Agent (model default, mode Agent)**_
---
Checking for `zipseq` and `unzipseq` files and the `\Renders` directory structure.
---
<tool-use data-tool-type="generic" data-tool-name="glob_file_search">
<details>
<summary>Tool use: **glob_file_search** • Searched codebase "*zipseq*" • **2** results in **1** directory</summary>
Directory: **a:\1 Amazon_Active_Projects\3 ProjectStructure** (2 files)
| File |
|------|
| `ZipSeqArchv.bat` |
| `UnzipSeqArchv.bat` |
</details>
</tool-use>
---
<tool-use data-tool-type="generic" data-tool-name="glob_file_search">
<details>
<summary>Tool use: **glob_file_search** • Searched codebase "*unzipseq*" • **1** result in **1** directory</summary>
Directory: **a:\1 Amazon_Active_Projects\3 ProjectStructure** (1 file)
| File |
|------|
| `UnzipSeqArchv.bat` |
</details>
</tool-use>
---
<tool-use data-tool-type="search" data-tool-name="codebase_search">
<details>
<summary>Tool use: **codebase_search** • Searched codebase "Renders directory structure or path" • **14** results</summary>
| File | Lines |
|------|-------|
| `NewProject.bat` | L58-83 |
| `NewProject.bat` | L63 |
| `UpgradeToGitProj.bat` | L59 |
| `UpgradeToGitProj.bat` | L53-126 |
| `components/gitignore` | L123 |
| `zip_sequences.py` | L565-585 |
| `UpgradeToGitProj.bat` | L121-166 |
| `zip_sequences.py` | L102 |
| `UpgradeToGitProj.bat` | L160-218 |
| `ConfigLoader.ps1` | L59 |
| `GetStructDir.ps1` | L27-68 |
| `UpgradeToGitProj.bat` | L215-295 |
| `ZipSeqArchv.bat` | L44 |
| `UnzipSeqArchv.bat` | L42 |
</details>
</tool-use>
---
---
Adding logic to deploy `ZipSeqArchv.bat` and `UnzipSeqArchv.bat` to the project root's `\Renders` directory. Checking the current deployment logic:
---
<tool-use data-tool-type="read" data-tool-name="read_file">
<details><summary>Tool use: **read_file** • Read file: ConfigLoader.ps1</summary>
</details>
</tool-use>
---
---
Adding logic to deploy `ZipSeqArchv.bat` and `UnzipSeqArchv.bat` to the project root's `\Renders` directory:
---
<tool-use data-tool-type="write" data-tool-name="search_replace">
<details><summary>Tool use: **code_edit** • Edit file: ConfigLoader.ps1</summary>
**Chunk 1**
Lines added: 24, lines removed: undefined
```diff
@@ -317,4 +317,28 @@
     Write-Host "All batch files refreshed successfully." -ForegroundColor Green
+
+    # Deploy ZipSeqArchv.bat and UnzipSeqArchv.bat to \Renders
+    Write-Host "`n=== Deploying ZipSeqArchv.bat and UnzipSeqArchv.bat to \Renders ===" -ForegroundColor Magenta
+    $rendersDir = Join-Path -Path $resolvedProject -ChildPath 'Renders'
+    if (-not (Test-Path -LiteralPath $rendersDir -PathType Container)) {
+        New-Item -Path $rendersDir -ItemType Directory -Force | Out-Null
+        Write-Host "Created Renders directory: $rendersDir" -ForegroundColor Cyan
+    }
+    $zipFiles = @(
+        @{ Name = 'ZipSeqArchv.bat'; Source = Join-Path -Path $structDir -ChildPath 'ZipSeqArchv.bat' },
+        @{ Name = 'UnzipSeqArchv.bat'; Source = Join-Path -Path $structDir -ChildPath 'UnzipSeqArchv.bat' }
+    )
+    foreach ($zipFile in $zipFiles) {
+        $targetPath = Join-Path -Path $rendersDir -ChildPath $zipFile.Name
+        try {
+            Copy-Item -Path $zipFile.Source -Destination $targetPath -Force
+            Write-Host "[OK] $targetPath" -ForegroundColor Green
+        }
+        catch {
+            Write-Host "[FAIL] $targetPath" -ForegroundColor Red
+            Write-Host "       $($_.Exception.Message)" -ForegroundColor DarkRed
+            exit 1
+        }
+    }
     # Deploy .gitattributes and .gitignore to project root if they don't exist
```
</details>
</tool-use>
---
---
Checking for linting errors:
---
<tool-use data-tool-type="generic" data-tool-name="read_lints">
<details>
<summary>Tool use: **read_lints** • Read lints for 1 file</summary>
Lint paths:
- `ConfigLoader.ps1`
Lint results
**No lint errors found**
</details>
</tool-use>
---
---
Updated `ConfigLoader.ps1` to deploy `ZipSeqArchv.bat` and `UnzipSeqArchv.bat` to the project root's `\Renders` directory.
The changes:
1. **Creates the `Renders` directory** if it doesn't exist
2. **Deploys both batch files** to `\Renders` in the project root
3. **Provides status messages** for each deployment
This follows the same pattern as the git files deployment: it runs after the recursive batch file updates and before the git files are deployed. The files are copied to `\Renders` even if they already exist elsewhere in the project.
---

View File

```diff
@@ -317,6 +317,30 @@ if ($MyInvocation.InvocationName -ne '.') {
     Write-Host "All batch files refreshed successfully." -ForegroundColor Green
+
+    # Deploy ZipSeqArchv.bat and UnzipSeqArchv.bat to \Renders
+    Write-Host "`n=== Deploying ZipSeqArchv.bat and UnzipSeqArchv.bat to \Renders ===" -ForegroundColor Magenta
+    $rendersDir = Join-Path -Path $resolvedProject -ChildPath 'Renders'
+    if (-not (Test-Path -LiteralPath $rendersDir -PathType Container)) {
+        New-Item -Path $rendersDir -ItemType Directory -Force | Out-Null
+        Write-Host "Created Renders directory: $rendersDir" -ForegroundColor Cyan
+    }
+    $zipFiles = @(
+        @{ Name = 'ZipSeqArchv.bat'; Source = Join-Path -Path $structDir -ChildPath 'ZipSeqArchv.bat' },
+        @{ Name = 'UnzipSeqArchv.bat'; Source = Join-Path -Path $structDir -ChildPath 'UnzipSeqArchv.bat' }
+    )
+    foreach ($zipFile in $zipFiles) {
+        $targetPath = Join-Path -Path $rendersDir -ChildPath $zipFile.Name
+        try {
+            Copy-Item -Path $zipFile.Source -Destination $targetPath -Force
+            Write-Host "[OK] $targetPath" -ForegroundColor Green
+        }
+        catch {
+            Write-Host "[FAIL] $targetPath" -ForegroundColor Red
+            Write-Host "       $($_.Exception.Message)" -ForegroundColor DarkRed
+            exit 1
+        }
+    }
 
     # Deploy .gitattributes and .gitignore to project root if they don't exist
     Write-Host "`n=== Deploying .gitattributes and .gitignore to project root ===" -ForegroundColor Magenta
     $gitFiles = @(
```

View File

```diff
@@ -40,6 +40,16 @@
 *.obj filter=lfs diff=lfs merge=lfs -text
 *.usd filter=lfs diff=lfs merge=lfs -text
 *.vdb filter=lfs diff=lfs merge=lfs -text
+*.bphys filter=lfs diff=lfs merge=lfs -text
+*.bobj filter=lfs diff=lfs merge=lfs -text
+*.bvel filter=lfs diff=lfs merge=lfs -text
+*.bpointcache filter=lfs diff=lfs merge=lfs -text
+*.blob filter=lfs diff=lfs merge=lfs -text
+*.unitypackage filter=lfs diff=lfs merge=lfs -text
+
+# Flip Fluids cache files
+*.data filter=lfs diff=lfs merge=lfs -text
+*.sqlite3 filter=lfs diff=lfs merge=lfs -text
 
 # Substance files
 *.sbs filter=lfs diff=lfs merge=lfs -text
@@ -81,3 +91,6 @@
 # Other
 *.uni filter=lfs diff=lfs merge=lfs -text
 *.pdn filter=lfs diff=lfs merge=lfs -text
+
+# Python files
+*.npz filter=lfs diff=lfs merge=lfs -text
```

View File

```diff
@@ -41,13 +41,6 @@
 *.tmp
 *.temp
 
-# Blender internal cache files
-*.bpy
-*.bmesh
-*.bvh
-*.bobj
-*.bphys
-
 # OS generated files
 .DS_Store
 .DS_Store?
```

View File

```diff
@@ -2,7 +2,7 @@
   "dailyFormat": "daily_YYMMDD",
   "structDir": "A:\\1 Amazon_Active_Projects\\3 ProjectStructure",
   "zipper": "7z",
-  "compression": 5,
+  "compression": 0,
   "Max7zInst": 0
 }
```
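
For context, the `compression` value here is the level that `zip_sequences.py` hands to 7-Zip, and with the change further down, level 0 now selects 7-Zip's Copy (store) method rather than an actual compression pass. A minimal sketch of that mapping, using only switches that appear in the diff below (`-m0=Copy`, `-m0=PPMd`, `-m0=LZMA2`, `-md`, `-mmt`); the dictionary literal stands in for however the script actually loads this JSON, which the compare view does not show:

```python
def seven_zip_method_args(level: int, method: str = "LZMA2") -> list[str]:
    """Map the config's compression level/method to 7z '-m' switches (simplified sketch)."""
    if level == 0:
        # Level 0: store only, no compression work (fastest)
        return ["-m0=Copy", "-mmt=on"]
    if method == "PPMd":
        return ["-m0=PPMd:mem=1024m"]
    return ["-m0=LZMA2", "-md=1024m"]

# Values mirroring the JSON above (a stand-in for the script's real config loading).
cfg = {"zipper": "7z", "compression": 0, "Max7zInst": 0}
print(seven_zip_method_args(cfg["compression"]))  # -> ['-m0=Copy', '-mmt=on']
```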

View File

```diff
@@ -476,15 +476,42 @@ def is_archive_path(path: Path) -> bool:
     return any(part in ("_archive", "_CURRENT") for part in path.parts)
 
 
-def find_sequence_dirs(root: Path) -> Iterator[Path]:
+def find_sequence_dirs(root: Path, *, verbose: bool = False) -> Iterator[Path]:
+    seen_dirs = set()  # Track directories we've already yielded to avoid duplicates
     for dirpath, dirnames, filenames in os.walk(root):
         path = Path(dirpath)
         dirnames[:] = [d for d in dirnames if d not in ("_archive", "_CURRENT")]
         if is_archive_path(path):
+            if verbose:
+                rel = path.relative_to(root) if path.is_relative_to(root) else path
+                log("scan", f"Skipping archive path: {rel}", verbose_only=True, verbose=verbose)
             continue
+        # Check if this directory has sequence files directly
         has_frames = any(Path(dirpath, f).suffix.lower() in SEQUENCE_EXTENSIONS for f in filenames)
         if has_frames:
-            yield path
+            path_resolved = path.resolve()
+            if path_resolved not in seen_dirs:
+                seen_dirs.add(path_resolved)
+                yield path
+        elif verbose:
+            # Log directories that don't have sequence files for debugging
+            rel = path.relative_to(root) if path.is_relative_to(root) else path
+            if "scab" in path.name.lower():
+                # Special logging for directories that might be sequences
+                frame_extensions = [f for f in filenames if Path(f).suffix.lower() in SEQUENCE_EXTENSIONS]
+                all_files = list(path.iterdir()) if path.exists() else []
+                log("scan", f"Directory {rel}: {len(filenames)} files in dir, {len(frame_extensions)} with sequence extensions, {len(all_files)} total items", verbose_only=True, verbose=verbose)
+                # Check subdirectories
+                for subdir in dirnames[:5]:  # Check first 5 subdirs
+                    subdir_path = path / subdir
+                    try:
+                        subdir_files = list(subdir_path.iterdir()) if subdir_path.exists() else []
+                        subdir_frame_files = [f for f in subdir_files if f.is_file() and f.suffix.lower() in SEQUENCE_EXTENSIONS]
+                        if subdir_frame_files:
+                            log("scan", f"  Subdirectory {subdir} has {len(subdir_frame_files)} sequence files", verbose_only=True, verbose=verbose)
+                    except (OSError, PermissionError):
+                        pass
 
 
 def iter_sequence_files(seq_dir: Path) -> Iterator[Path]:
```
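
One detail worth noting in the new `find_sequence_dirs`: candidate directories are tracked by their `Path.resolve()` result, which presumably guards against the same folder being queued twice when it can be reached through more than one walked spelling. A small standalone illustration of that resolve-and-dedup pattern (not code from the script itself):

```python
from pathlib import Path

def dedup_by_real_path(candidates: list[Path]) -> list[Path]:
    """Keep the first occurrence of each distinct resolved (real) directory."""
    seen: set[Path] = set()
    unique: list[Path] = []
    for candidate in candidates:
        resolved = candidate.resolve()  # collapses '.' segments and resolves links to the real location
        if resolved in seen:
            continue
        seen.add(resolved)
        unique.append(candidate)
    return unique

# Two spellings of the same folder collapse to a single entry.
print(dedup_by_real_path([Path("Renders/Comp/shotA"), Path("Renders/./Comp/shotA")]))
```
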
```diff
@@ -539,12 +566,20 @@ def state_changed(seq_state: dict, stored_state: dict | None) -> bool:
 
 def archive_path_for(seq_dir: Path) -> Path:
     rel = seq_dir.relative_to(RENDER_ROOT)
     suffix = ".7z" if ZIPPER_TYPE == "7z" else ".zip"
-    return (ARCHIVE_ROOT / rel).with_suffix(suffix)
+    # Append suffix instead of replacing, since directory names might have dots (e.g., "scab_v2.1")
+    return ARCHIVE_ROOT / f"{rel}{suffix}"
 
 
 def sequence_dir_for(zip_path: Path) -> Path:
     rel = zip_path.relative_to(ARCHIVE_ROOT)
-    return (RENDER_ROOT / rel).with_suffix("")
+    # Remove the archive suffix (.7z or .zip) from the end
+    # Handle both .7z and .zip extensions
+    rel_str = str(rel)
+    if rel_str.endswith(".7z"):
+        rel_str = rel_str[:-3]
+    elif rel_str.endswith(".zip"):
+        rel_str = rel_str[:-4]
+    return RENDER_ROOT / rel_str
 
 
 def state_path_for(zip_path: Path) -> Path:
```
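
The `archive_path_for` / `sequence_dir_for` change is easy to miss: `Path.with_suffix` treats everything after the last dot as an extension, so a version-numbered directory such as `scab_v2.1` (the example named in the diff comment) would have been archived under a mangled name. A quick check of the old versus new behaviour:

```python
from pathlib import Path

archive_root = Path("_archive")
rel = Path("Comp/scab_v2.1")  # sequence directory whose name contains a dot

# Old behaviour: with_suffix() swaps out ".1" as if it were a file extension.
print((archive_root / rel).with_suffix(".7z"))  # -> _archive/Comp/scab_v2.7z (part of the name is lost)

# New behaviour: append the archive suffix so the full directory name survives,
# letting sequence_dir_for() strip ".7z"/".zip" to recover the original path.
print(archive_root / f"{rel}.7z")               # -> _archive/Comp/scab_v2.1.7z
```
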
```diff
@@ -612,42 +647,49 @@ def zip_sequence(seq_dir: Path, zip_path: Path, per_job_memory_limit: int | None
     ]
 
     # Set compression method and memory/dictionary size based on method
-    # PPMd uses memory parameter in method string, LZMA2 uses -md flag
+    # At compression level 0, use Copy (store) method for maximum speed
     FIXED_DICT_SIZE_MB = 1024
-    if COMPRESSION_METHOD == "PPMd":
-        # PPMd: specify memory as part of method string
-        cmd.append(f"-m0=PPMd:mem={FIXED_DICT_SIZE_MB}m")
-    elif COMPRESSION_METHOD == "LZMA2":
-        # LZMA2: use -md for dictionary size
-        cmd.append("-m0=LZMA2")
-        cmd.append(f"-md={FIXED_DICT_SIZE_MB}m")
-    elif COMPRESSION_METHOD == "BZip2":
-        # BZip2: use -md for dictionary size (smaller max: 900KB)
-        max_bzip2_dict = min(FIXED_DICT_SIZE_MB, 900)  # BZip2 max is 900KB
-        cmd.append("-m0=BZip2")
-        cmd.append(f"-md={max_bzip2_dict}k")
-    elif COMPRESSION_METHOD == "Deflate":
-        # Deflate: doesn't use dictionary size parameter
-        cmd.append("-m0=Deflate")
+    if COMPRESSION_LEVEL == 0:
+        # Level 0 = no compression, just store files (fastest)
+        cmd.append("-m0=Copy")
+        # Copy method doesn't need threading, but enable it anyway for consistency
+        cmd.append("-mmt=on")
     else:
-        # Fallback: use LZMA2
-        cmd.append("-m0=LZMA2")
-        cmd.append(f"-md={FIXED_DICT_SIZE_MB}m")
+        # Compression levels 1-9: use configured compression method
+        if COMPRESSION_METHOD == "PPMd":
+            # PPMd: specify memory as part of method string
+            cmd.append(f"-m0=PPMd:mem={FIXED_DICT_SIZE_MB}m")
+        elif COMPRESSION_METHOD == "LZMA2":
+            # LZMA2: use -md for dictionary size
+            cmd.append("-m0=LZMA2")
+            cmd.append(f"-md={FIXED_DICT_SIZE_MB}m")
+        elif COMPRESSION_METHOD == "BZip2":
+            # BZip2: use -md for dictionary size (smaller max: 900KB)
+            max_bzip2_dict = min(FIXED_DICT_SIZE_MB, 900)  # BZip2 max is 900KB
+            cmd.append("-m0=BZip2")
+            cmd.append(f"-md={max_bzip2_dict}k")
+        elif COMPRESSION_METHOD == "Deflate":
+            # Deflate: doesn't use dictionary size parameter
+            cmd.append("-m0=Deflate")
+        else:
+            # Fallback: use LZMA2
+            cmd.append("-m0=LZMA2")
+            cmd.append(f"-md={FIXED_DICT_SIZE_MB}m")
 
     # CPU thread allocation: when there's only 1 worker, use all CPU cores
     # When there are multiple workers, use auto mode to let 7z decide
     # Note: PPMd is single-threaded and won't benefit from -mmt
     cpu_cores = os.cpu_count() or 1
     if COMPRESSION_METHOD == "PPMd":
         # PPMd is single-threaded, so -mmt won't help
         # But we can still set it for consistency
         cmd.append("-mmt=on")
     elif worker_count == 1:
         # Single worker: use all CPU cores for maximum speed (LZMA2, BZip2, Deflate support this)
         cmd.append(f"-mmt={cpu_cores}")
     else:
         # Multiple workers: use auto mode (7z will manage threads)
         cmd.append("-mmt=on")
 
     cmd.extend([
         str(temp_zip_abs),
@@ -796,7 +838,7 @@ def run_zip(requested_workers: int | None, *, verbose: bool) -> int:
     queued = 0
 
     if RENDER_ROOT.exists():
-        for seq_dir in find_sequence_dirs(RENDER_ROOT):
+        for seq_dir in find_sequence_dirs(RENDER_ROOT, verbose=verbose):
             total_scanned += 1
             rel = seq_dir.relative_to(RENDER_ROOT)
             if total_scanned <= 5 or total_scanned % 10 == 0:
```