# AME Remote Job Manager

A web-based job submission and monitoring tool for Adobe Media Encoder (AME) in a Grass Valley AMPP environment. Editors upload `.prproj` files through the browser, the server remaps `.gves` proxy paths to high-resolution UNC paths, then delivers the remapped project to AME's watch folder for automated rendering.

## How It Works

1. Editor uploads a Premiere Pro `.prproj` file via the web UI
2. The server parses the project and remaps any `.gves` proxy media paths to their high-res UNC equivalents
3. The remapped project file is written to AME's watch folder
4. AME picks up the file, renders it, and writes output to the output folder
5. The job manager polls both folders and updates job status automatically: `queued → encoding → complete`

## Features

- `.prproj` path remapping — replaces `.gves` proxy references with full-resolution UNC paths
- Dry-run analysis mode — inspect which paths would be remapped before submitting
- Real-time job status tracking via watch folder and output folder polling
- AME log parsing — reads `AMEEncodingLog.txt` and error logs for encoding stats
- SMB/UNC path configuration via the settings UI
- Session-based authentication
- Docker-ready with volume-backed persistent storage

## Prerequisites

- Docker and Docker Compose (the `SYS_ADMIN` capability is only required if you mount SMB from inside the container)
- Adobe Media Encoder running on a Windows machine with a configured watch folder
- SMB network share accessible at `//172.18.210.5/ame` containing:
  - `Watch` — folder where `.prproj` files are picked up by AME
  - `Output` — folder where AME writes encoded files
  - `Logs` — folder where AME writes `AMEEncodingLog.txt`
- Network connectivity from the Docker host to the SMB server
- `.prproj` files that reference `.gves` proxy media on the AMPP platform

## Quick Start

1. Clone the repo and copy the example env file:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env`:

   ```env
   PORT=3100
   AUTH_USER=admin
   AUTH_PASS=changeme

   # Docker volume mount paths
   WATCH_FOLDER=/watch
   OUTPUT_FOLDER=/output
   AME_LOG_DIR=/ame-logs

   # Polling interval (ms)
   POLL_INTERVAL_MS=5000

   # Job timeout — mark as error if AME hasn't produced output after this long (ms)
   JOB_TIMEOUT_MS=3600000
   ```

3. **Mount the SMB share on the Docker host** (one-time setup):

   ```bash
   # Create mount point
   sudo mkdir -p /mnt/smb-ame

   # Mount the SMB share (substitute your own credentials/IP)
   sudo mount -t cifs //172.18.210.5/ame /mnt/smb-ame \
     -o username=smb,password=YourPassword,uid=1000,gid=1000,file_mode=0755,dir_mode=0755,vers=3.0

   # Verify mount succeeded
   mount | grep cifs
   ```

4. Configure `docker-compose.yml` with bind-mounts to the subdirectories, then start:

   ```bash
   docker compose up -d
   ```

5. Open `http://localhost:3100` in your browser and log in.

## Docker Compose Configuration

The key to this architecture is binding SMB subdirectories from the host into the container paths the app expects:

```yaml
services:
  ame-job-manager:
    build: .
    ports:
      - "3100:3100"
    volumes:
      # Bind SMB subdirectories from host into container
      # The host must have these mounted (e.g., at /mnt/smb-ame) from the SMB server
      - /mnt/smb-ame/Watch:/watch      # Where AME watches for new .prproj files
      - /mnt/smb-ame/Output:/output    # Where AME writes encoded output
      - /mnt/smb-ame/Logs:/ame-logs    # Where AME writes AMEEncodingLog.txt
      # Persistent storage for job records and session data
      - app_data:/data
      - upload_tmp:/tmp/uploads
    env_file: .env
    restart: unless-stopped

volumes:
  app_data:
  upload_tmp:
```

**Important**: The paths `/mnt/smb-ame/Watch`, `/mnt/smb-ame/Output`, and `/mnt/smb-ame/Logs` must exist on the Docker host after mounting. Create them if they don't:

```bash
mkdir -p /mnt/smb-ame/{Watch,Output,Logs}
```

## Architecture: Docker Container with Host-Level SMB Mounting

The job manager runs in Docker but needs access to SMB network shares for the watch folder, output folder, and AME logs.
The architecture accomplishes this by:

1. **Host-level SMB mount**: The Docker host mounts the SMB share using native Linux `mount -t cifs`, making it available at a path like `/mnt/smb-ame`
2. **Bind-mounts from host to container**: `docker-compose.yml` binds specific subdirectories from the host into the container:
   - `/mnt/smb-ame/Watch` → `/watch` (inside container)
   - `/mnt/smb-ame/Output` → `/output` (inside container)
   - `/mnt/smb-ame/Logs` → `/ame-logs` (inside container)
3. **Application accesses local paths**: The Node.js app reads/writes `/watch`, `/output`, and `/ame-logs` as if they were local, unaware of the SMB infrastructure

### Why This Approach?

We chose host-level SMB mounting over container-level mounting for several reasons:

- **Capability constraints**: Docker requires the `SYS_ADMIN` capability and `apparmor=unconfined` to mount CIFS from within a container. This widens the security surface and may fail depending on the host kernel or Docker daemon version.
- **Reliability**: SMB mounts are more stable and persistent when managed by the host OS (systemd, `/etc/fstab`) than by ephemeral container entrypoint scripts.
- **Separation of concerns**: The container doesn't need to know or care about SMB credentials — the host handles authentication; the container just accesses mounted paths.
- **Volume flexibility**: If the SMB share is ever replaced with local storage or a different protocol, only the host mount needs to change; the container remains unaware.

### SMB Network Share Configuration

The watch folder, output folder, and logs are expected to live on a shared network location (SMB/CIFS). The Docker host must mount this share at the path specified in the `docker-compose.yml` volumes.

### Configuration Steps

These steps apply only if you let the container mount the SMB share itself instead of using the recommended host-level bind-mounts:

1. **Start the container** (it will run with fallback Docker volumes until SMB credentials are configured)
2. **Open the Settings panel** in the web UI (⚙️ icon in the header)
3. **Fill in SMB credentials:**
   - **SMB Username**: e.g., `encoder` or `DOMAIN\encoder`
   - **SMB Password**: Network password (stored server-side, never exposed to the browser)
   - **SMB Domain/Workgroup**: Optional, e.g., `WORKGROUP` or `BMG`
   - **Notes**: Optional reference, e.g., `\\172.18.210.5\ame`
4. **Create subdirectories on the SMB share** (if they don't exist):

   ```
   \\172.18.210.5\ame\
   ├── Watch\     (AME's watch folder)
   ├── Output\    (AME's output folder)
   └── Logs\      (contains AMEEncodingLog.txt)
   ```

5. **Restart the container** — it will mount the SMB share on next startup

### Troubleshooting SMB Mount Issues

- **Mount failed**: Check that credentials are correct and the SMB server is reachable
- **Permission denied**: Verify the SMB user has read/write access to the share
- **Container falls back to local volumes**: Check the Docker logs for mount errors: `docker logs ame-job-manager`
- **Shares already mounted locally**: If the watch/output/logs folders are already mounted on the Docker host via `/etc/fstab`, use bind-mounts in `docker-compose.yml` instead:

  ```yaml
  volumes:
    - /mnt/host-smb-share/watch:/watch
    - /mnt/host-smb-share/output:/output
    - /mnt/host-smb-share/logs:/ame-logs
  ```

  In this case you don't need to configure SMB credentials in the app settings.

## Job Lifecycle

| Status | Meaning |
|--------|---------|
| `queued` | File written to watch folder, waiting for AME to pick it up |
| `encoding` | File disappeared from watch folder — AME is actively rendering |
| `complete` | Output file detected in the output folder |
| `error` | Job timed out or the AME log reported an error |

The server polls both folders every `POLL_INTERVAL_MS` milliseconds to detect status transitions.

## Path Remapping

The core function of this tool is to make `.prproj` files renderable on a high-res render machine. Premiere projects created on AMPP workstations often reference `.gves` proxy files.
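The rewrite itself can be sketched as a simple prefix substitution. Everything below — the mapping table, the function name, and the assumption that media paths appear as plain text in the decompressed project XML — is illustrative, not the actual `prproj-remapper.js` code:

```javascript
// Illustrative sketch only — NOT the actual prproj-remapper.js implementation.
// Assumes the .prproj has already been decompressed to XML text
// (.prproj files are typically gzip-compressed XML).
const mappings = [
  // proxy path prefix → high-resolution UNC prefix (example values)
  { from: '/gves/proxies/newscast01/', to: '\\\\172.18.210.5\\media\\hires\\newscast01\\' },
];

function remapPaths(projectXml, rules = mappings) {
  let remapped = projectXml;
  let replacements = 0;
  for (const { from, to } of rules) {
    const parts = remapped.split(from); // every occurrence of the proxy prefix
    replacements += parts.length - 1;
    remapped = parts.join(to);
  }
  // `replacements` is the kind of count a dry-run report would surface
  return { remapped, replacements };
}
```

A real implementation also has to decide what happens to the `.gves` extension itself (the high-res file may use a different container format); that detail is deliberately left out of this sketch.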
This server rewrites those references to the corresponding UNC/high-res paths before handing the project to AME. The remapping logic lives in `prproj-remapper.js`. You can test a remap without submitting a job using the analyze endpoint or the dry-run button in the UI.

## API Reference

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/api/login` | Authenticate and get a session ID |
| POST | `/api/logout` | End session |
| POST | `/api/jobs` | Submit a `.prproj` file and create a job |
| POST | `/api/jobs/analyze` | Dry-run analysis of a `.prproj` (no submission) |
| GET | `/api/jobs` | List all jobs |
| GET | `/api/jobs/:id` | Get a single job by ID |
| DELETE | `/api/jobs/:id` | Delete a job record |
| GET | `/api/status` | System status — folder health, job counts, AME log stats |
| GET | `/api/ame/logs` | Full AME log data with recent entries |
| GET | `/api/settings` | Get current settings |
| POST | `/api/settings` | Update settings (watch folder, SMB config, etc.) |

All endpoints except `/api/login` require an `x-session-id` header from a valid login.
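As a sketch of how a client would attach that header, the helper below merges the session ID into fetch options. The base URL, request bodies, and the login response shape are assumptions, not documented behavior:

```javascript
// Hypothetical client helper — the endpoint paths come from the table above,
// but the base URL and response field names are assumptions.
const BASE = 'http://localhost:3100';

// Merge the x-session-id header into fetch options without clobbering caller headers.
function withSession(sessionId, options = {}) {
  return {
    ...options,
    headers: { ...(options.headers || {}), 'x-session-id': sessionId },
  };
}

// Example flow (requires a running server; response shape is assumed):
//   const login = await fetch(`${BASE}/api/login`, {
//     method: 'POST',
//     headers: { 'content-type': 'application/json' },
//     body: JSON.stringify({ username: 'admin', password: 'changeme' }),
//   }).then(r => r.json());
//   const jobs = await fetch(`${BASE}/api/jobs`, withSession(login.sessionId)).then(r => r.json());
```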
## Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3100` | HTTP port |
| `AUTH_USER` | `admin` | Login username |
| `AUTH_PASS` | `password` | Login password |
| `WATCH_FOLDER` | `/watch` | Path AME watches for new projects |
| `OUTPUT_FOLDER` | `/output` | Path AME writes rendered output to |
| `DATA_DIR` | `/data` | Persistent storage for job records and sessions |
| `UPLOAD_TEMP` | `/tmp/uploads` | Temp dir for incoming file uploads |
| `POLL_INTERVAL_MS` | `5000` | How often to poll the watch/output folders (ms) |
| `JOB_TIMEOUT_MS` | `3600000` | Time before a stuck job is marked as error (ms) |
| `AME_LOG_DIR` | `/ame-logs` | Directory containing `AMEEncodingLog.txt` |

## Architecture Decisions

### Host-Level SMB Mounting (Final Approach)

**Decision**: Mount the SMB share on the Docker host at `/mnt/smb-ame`, then bind-mount individual subdirectories into the container.

**Why not container-level mounting?**

- Container-level CIFS mounting requires the Docker `SYS_ADMIN` capability and disabling AppArmor, introducing security risks
- The host kernel version or Docker daemon configuration may not support container-level mounts
- SMB credentials in container entrypoint scripts are harder to manage and rotate

**Why not a single mount point?**

- The initial implementation mounted `/mnt/smb-ame` as `/smb-share` in the container, then created separate Docker volumes for `/watch`, `/output`, and `/ame-logs`
- This caused uploaded files to go to ephemeral Docker volumes instead of the SMB share
- Fixed by binding each SMB subdirectory directly to its container path

**Final solution**:

- Host mounts SMB at `/mnt/smb-ame` (persisted via `/etc/fstab` or a systemd automount)
- `docker-compose.yml` specifies three bind-mounts:

  ```yaml
  - /mnt/smb-ame/Watch:/watch
  - /mnt/smb-ame/Output:/output
  - /mnt/smb-ame/Logs:/ame-logs
  ```

- The app reads/writes `/watch`, `/output`, and `/ame-logs` as local paths
- Files automatically appear on the SMB share, where AME can access them

### Separation of Concerns

The Docker container intentionally does NOT handle SMB mounting. This separation ensures:

- **Security**: SMB credentials live on the host, never in container code or `.env` files
- **Reliability**: The host OS manages mount persistence; a container restart doesn't affect SMB access
- **Portability**: The container works with any mounted filesystem (SMB, NFS, local, etc.) — no protocol assumptions
- **Operations**: The infrastructure team manages the storage layer; the application team manages the app container

### Why .prproj Path Remapping?

The core value of this tool is translating proxy-resolution Premiere projects to high-resolution paths:

- **AMPP workflow**: Editors on Grass Valley AMPP workstations edit with `.gves` proxy media for responsiveness
- **Problem**: AME on a dedicated render machine can't resolve `.gves` paths — they're relative to AMPP infrastructure
- **Solution**: Before handing the project to AME, rewrite all `.gves` references to their corresponding high-resolution UNC paths
- **Result**: AME can render the full-resolution media without the editor needing to manage two versions of the project

### Why Not Auto-Generate Remappings?

The remapping rules are configured manually in the app settings because:

- `.gves` files and their high-res equivalents may not follow a consistent naming pattern
- Different projects may use different proxy strategies
- Manual configuration is explicit and auditable — you can see what gets remapped and why

## Troubleshooting

### Files Don't Appear on SMB Share After Upload

**Symptom**: The file appears in the job queue but isn't visible at `//172.18.210.5/ame/Watch` on the network.
**Check 1**: Verify SMB is mounted on the host

```bash
mount | grep cifs
```

**Check 2**: Verify the subdirectories exist and are accessible

```bash
ls -la /mnt/smb-ame/Watch
ls -la /mnt/smb-ame/Output
ls -la /mnt/smb-ame/Logs
```

**Check 3**: Verify the container has the bind-mounts

```bash
docker inspect ame-job-manager | jq '.[0].Mounts[] | {Source, Destination}'
```

**Check 4**: Verify the file is actually written to `/watch` inside the container

```bash
docker exec ame-job-manager ls -la /watch/
```

**If the file is in `/watch` inside the container but not on the host SMB share**: The bind-mount isn't working. Verify:

- `/mnt/smb-ame/Watch` exists on the host
- SMB is mounted (check `mount | grep cifs`)
- File permissions allow writing (`ls -la /mnt/smb-ame`)

### SMB Mount Fails on Host

**Symptom**: The `mount -t cifs` command fails with "Permission denied" or "Connection refused".

**Check credentials**:

```bash
# Test with the correct username/password/domain
sudo mount -t cifs //172.18.210.5/ame /mnt/smb-ame \
  -o username=smb,password=YourPassword,domain=WORKGROUP
```

**Check network connectivity**:

```bash
# Can we reach the SMB server?
ping 172.18.210.5

# Can we list the shares? (note: smbclient's -p flag is the port;
# pass the password as user%password via -U)
smbclient -L //172.18.210.5 -U smb%YourPassword
```

**Check that the subdirectories exist on the share**:

```bash
smbclient //172.18.210.5/ame -U smb%YourPassword
> ls
```

### Container Starts but No Files Appear

**Symptom**: The container is running and uploads succeed, but files don't show up anywhere.

**Check 1**: Are there any uploads at all?

```bash
docker exec ame-job-manager find /tmp/uploads -type f
```

**Check 2**: Are files being written to `/watch`?

```bash
docker exec ame-job-manager ls -la /watch/
```

**Check 3**: Check the Docker logs for errors during startup

```bash
docker logs ame-job-manager
```

Look for mount errors or permission issues in the logs.