
Architecture Overview

This page explains how ntkDeploy stores data, how the app starts up, how profiles are built into deployable artifacts, and how those artifacts reach Windows devices. Read this page if you want to understand why the app behaves the way it does before you start working through tutorials.

Prerequisites

  • Familiarity with Windows networking (UNC paths, SMB shares) and basic configuration management concepts.
  • See the Glossary for any unfamiliar terms.

Local-Only Data Storage

ntkDeploy stores all data locally on the administrator's workstation using an embedded SQLite database accessed through the Drift ORM. There is no cloud database, no central server, and no requirement for internet connectivity.

This design has direct operational consequences:

| Consequence | Detail |
| --- | --- |
| No login required | The app opens directly — there are no ntkDeploy cloud accounts or credentials. |
| Data is per-workstation | Profile, assignment, and audit data lives in the local database file. If multiple administrators each run ntkDeploy, they each maintain their own local copy. |
| Offline capable | You can create and edit profiles without network access. Deployment and policy operations require intranet connectivity to UNC shares and the Policy Manager endpoint. |
| Backup is your responsibility | Back up the SQLite database file along with normal workstation backups if you need to preserve history. |
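Because all state lives in one SQLite file, a routine backup can be a consistent online copy of that file. The following is an illustrative Python sketch (the file paths are assumptions — ntkDeploy's actual database location depends on your installation), not part of the app itself:

```python
import sqlite3

# Hypothetical paths -- the real database location depends on your install.
SOURCE_DB = "ntkdeploy.sqlite"
BACKUP_DB = "ntkdeploy-backup.sqlite"

def backup_database(source_path: str, backup_path: str) -> None:
    """Copy a SQLite database safely using the online backup API,
    which produces a consistent snapshot even if the file is in use."""
    src = sqlite3.connect(source_path)
    dst = sqlite3.connect(backup_path)
    src.backup(dst)
    dst.close()
    src.close()

backup_database(SOURCE_DB, BACKUP_DB)
```

Using the backup API rather than a plain file copy avoids capturing a half-written page if the app happens to be mid-transaction.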

Database Schema

The database contains the following primary tables (schema version 8):

| Table | Purpose |
| --- | --- |
| profiles | Profile metadata — name, environment, department, priority |
| profile_versions | Versioned settings JSON per profile |
| device_groups | Deployment target groups with UNC paths |
| device_group_paths | Individual UNC paths within a device group |
| assignments | Links between profile versions and device groups |
| rollout_events | Per-device deployment outcome records |
| audit_log_entries | Append-only change and action history |
| providers | Cloud provider instance configurations |
| settings | Application settings (theme, server endpoints, cert paths) |
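The relationships between the core tables can be sketched as plain SQL. This is an illustrative reconstruction in Python's sqlite3 — the column names and types in the real Drift-managed schema may differ:

```python
import sqlite3

# Illustrative sketch of three core tables and how they relate:
# a profile has versions, and assignments link a version to a device group.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE profiles (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    environment TEXT NOT NULL,
    department TEXT,
    priority INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE profile_versions (
    id INTEGER PRIMARY KEY,
    profile_id INTEGER NOT NULL REFERENCES profiles(id),
    settings_json TEXT NOT NULL,
    is_valid INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE assignments (
    id INTEGER PRIMARY KEY,
    profile_version_id INTEGER NOT NULL REFERENCES profile_versions(id),
    device_group_id INTEGER NOT NULL
);
""")
con.execute("INSERT INTO profiles (name, environment) VALUES ('Finance', 'prod')")
con.execute(
    "INSERT INTO profile_versions (profile_id, settings_json, is_valid) "
    "VALUES (1, '{}', 1)")
# Only valid versions are eligible for deployment.
row = con.execute(
    "SELECT p.name FROM profiles p "
    "JOIN profile_versions v ON v.profile_id = p.id WHERE v.is_valid = 1"
).fetchone()
assert row == ("Finance",)
```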

App Boot Sequence

Every time ntkDeploy starts, the following sequence runs before any UI appears:

main() → bootstrap()
           │
           ├─ Attach FlutterError handler (logs errors to console)
           ├─ Install AppBlocObserver (logs BLoC state changes with secret redaction)
           ├─ ServiceLocator.instance.initialize()
           │     ├─ Open AppDatabase (SQLite file)
           │     ├─ Instantiate repositories (Profile, Provider, Assignment, Audit, …)
           │     ├─ Initialize SchemaRegistry (registers all profile schemas)
           │     ├─ Instantiate services (ConfigBuildService, DeploymentService, …)
           │     └─ PolicyApiClient / PolicyRepository left null — created only after
           │          policy server credentials/settings are supplied by the user
           └─ runApp(App())
                 └─ AppShell (navigation, environment sidebar, global search)

The Service Locator is a singleton. Core repositories and services are created once and reused for the lifetime of the app session. The widget tree is only constructed after initialization completes successfully. Policy server connectivity is polled on a background timer that activates once valid policy credentials are configured in Settings — not at startup.
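The create-once, reuse-everywhere behaviour of the Service Locator can be sketched as a plain singleton. This is illustrative Python, not the app's Dart code; the class and attribute names are assumptions:

```python
class ServiceLocator:
    """Process-wide singleton holding repositories and services."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._initialized = False
        return cls._instance

    def initialize(self) -> None:
        if self._initialized:          # created once, reused for the session
            return
        self.database = object()       # stands in for the SQLite AppDatabase
        self.repositories = {"profile": object(), "audit": object()}
        self.policy_api_client = None  # deliberately deferred: created only
                                       # after policy credentials are supplied
        self._initialized = True

locator = ServiceLocator()
locator.initialize()
assert locator is ServiceLocator()        # same instance everywhere
assert locator.policy_api_client is None  # not created at startup
```

The deferred `policy_api_client` mirrors the boot diagram above: policy connectivity is configuration-driven, not part of startup.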


Schema-Driven Profile Editing

Profile forms in ntkDeploy are generated at runtime from registered Schema definitions rather than being hard-coded widgets. This means:

  1. The Schema Registry maps a profile type identifier (for example, "appconfig-v1") to a schema definition that declares all fields, their types, validation rules, and display labels.
  2. When you open the Create or Edit Profile form, the UI reads the active schema and renders the appropriate field widgets automatically.
  3. As you edit, each change is validated against the schema rules. The live JSON preview reflects exactly what the resulting configuration JSON will contain.
  4. A Profile Version is only marked valid when all schema rules pass. Only valid versions can be used for deployment.

This architecture means new profile types can be added to the registry without changing any form UI code.
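The registry-plus-validation idea can be sketched in a few lines. This is an illustrative Python model — the real registry and rules live in the Dart codebase, and the field names shown are hypothetical:

```python
# Hypothetical schema registry: maps a profile type identifier to field
# declarations (type and whether the field is required).
SCHEMAS = {
    "appconfig-v1": {
        "endpoint": {"type": str, "required": True},
        "retries":  {"type": int, "required": False},
    }
}

def validate(profile_type: str, settings: dict) -> list[str]:
    """Return a list of rule violations; an empty list means 'valid'."""
    errors = []
    for field, rule in SCHEMAS[profile_type].items():
        if field not in settings:
            if rule["required"]:
                errors.append(f"missing required field: {field}")
            continue
        if not isinstance(settings[field], rule["type"]):
            errors.append(f"wrong type for {field}")
    return errors

# A version is only marked valid when no rules are violated.
assert validate("appconfig-v1", {"endpoint": "https://x", "retries": 3}) == []
assert validate("appconfig-v1", {"retries": "3"}) == [
    "missing required field: endpoint", "wrong type for retries"]
```

Adding a new profile type is then just a new entry in the registry; the form renderer and validator are unchanged.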


Config Build Pipeline

When you run a deployment, the following pipeline executes before any file is written to the network:

Profile Version (valid settings JSON)
        │
        ▼
 ConfigBuildService
   ├─ Merges settings JSON
   ├─ Resolves Provider credentials → embeds under "cloudProviders"
   ├─ Requests Policy Snapshot → embeds snapshot reference + payload
   └─ Produces Artifact JSON
        │
        ▼
 DeploymentService
   ├─ Backs up existing config at UNC Path (\\server\share\path\appconfig.json)
   └─ Writes Artifact JSON to UNC Path
        │
        ▼
 Rollout Controller / Repository
   └─ Records RolloutEvent (succeeded / failed) per device path
        │
        ▼
 AuditRepository
   └─ Appends AuditLogEntry (actor, action, entity, timestamp)

Important: Deployment artifacts include resolved Provider credentials. Treat artifact JSON files as sensitive and restrict read access on the SMB share accordingly.
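The build stage above amounts to merging three inputs into one JSON document. A minimal illustrative sketch in Python (the key names and provider entry are assumptions, not the exact artifact layout):

```python
import json

def build_artifact(settings: dict, providers: dict, snapshot: dict) -> str:
    """Merge the pieces the build pipeline combines into one artifact JSON."""
    artifact = dict(settings)               # valid profile-version settings
    artifact["cloudProviders"] = providers  # resolved credentials: sensitive!
    artifact["policySnapshot"] = snapshot   # snapshot reference + payload
    return json.dumps(artifact, indent=2)

artifact_json = build_artifact(
    settings={"theme": "dark"},
    providers={"exampleProvider": {"apiKey": "secret"}},  # hypothetical entry
    snapshot={"id": "snap-1", "payload": {}},
)
assert "cloudProviders" in artifact_json
```

Because resolved credentials are embedded verbatim, anything that can read the artifact on the share can read the secrets — hence the access-control warning above.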


UNC/SMB Deployment Model

ntkDeploy delivers configurations by writing files directly to Windows network shares. No agent software other than ntkDrive itself is installed on target devices. The deployment model works as follows:

  1. Each Device Group holds one or more \\server\share\path UNC addresses that point to directories on SMB shares accessible from the administrator's workstation.
  2. The Deployment Service authenticates using the workstation's current Windows credentials (pass-through authentication) — no separate deployment credentials are stored in ntkDeploy.
  3. Before writing, the service backs up any existing appconfig.json at the target path.
  4. The built Artifact is written as appconfig.json (or the configured filename) at the target path.
  5. ntkDrive on each managed device reads the updated file the next time it polls its configuration path.

Before initiating a deployment, you can verify UNC path reachability with the Test Connectivity action on the Device Groups screen.
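Steps 3 and 4 — back up, then overwrite — can be sketched with plain file operations. This is illustrative Python, not the Deployment Service itself; the backup filename is an assumption:

```python
import shutil
from pathlib import Path

def deploy_artifact(target_dir: str, artifact_json: str,
                    filename: str = "appconfig.json") -> None:
    """Back up any existing config at the target path, then write the
    new artifact. On Windows, target_dir would be a UNC path such as
    r"\\server\share\path"; access uses the workstation's current
    Windows credentials (pass-through authentication)."""
    target = Path(target_dir) / filename
    if target.exists():
        # Keep the previous config next to the new one (name is illustrative).
        shutil.copy2(target, target.with_name(target.name + ".bak"))
    target.write_text(artifact_json, encoding="utf-8")
```

Backing up before writing means a bad rollout can be reverted by copying the previous file back — no server-side history is required.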


Deployment Verification: Preflight, Connectivity Gate, and Snapshots

Before any artifact is written to the network, ntkDeploy enforces a multi-layer verification sequence:

| Check | What it verifies | Gate behaviour |
| --- | --- | --- |
| Connectivity Gate | Policy Manager /capabilities and /readyz both respond with required flags set | Blocks deployment if either fails |
| Ownership Mappings | Every UNC path in the target device group has a deviceKey → Peer ID mapping | Blocks deployment if any path is unassigned |
| Preflight | API call preflightBulkVerify returns a clean plan with no missing-plan actions | Blocks deployment if actions are outstanding (operator must confirm) |
| Snapshot retrieval | snapshotResolve + snapshotGet return a deterministic snapshot | Blocks deployment if snapshot cannot be obtained |

All four checks must pass before the deployment wizard advances to the final deploy step. This fail-closed design ensures that no partially-verified configuration reaches devices.
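The fail-closed gating logic reduces to "every check passes, or the wizard stops". A minimal illustrative sketch (the check names mirror the table above; the boolean inputs stand in for the real API calls):

```python
def run_verification(checks: dict[str, bool]) -> tuple[bool, list[str]]:
    """Fail-closed gate: deployment may proceed only if every check passed.
    Returns (ok, names_of_failed_checks)."""
    failures = [name for name, passed in checks.items() if not passed]
    return (len(failures) == 0, failures)

ok, failures = run_verification({
    "connectivity_gate": True,
    "ownership_mappings": True,
    "preflight_plan": False,   # e.g. preflightBulkVerify reported actions
    "snapshot_retrieval": True,
})
assert not ok and failures == ["preflight_plan"]  # wizard blocks the deploy
```

Note the design choice: a single failed check blocks everything, rather than deploying to the subset of paths that happened to verify.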


Feature Modules

The app is organized into feature modules, each responsible for a distinct area of functionality:

| Module | Responsibility |
| --- | --- |
| shell | App navigation, environment sidebar, global search, header policy status badge |
| dashboard | Statistics overview, quick actions, recent activity feed |
| profiles | Schema-driven profile creation, editing, versioning, import/export, priority reordering |
| device_groups | Device group management, UNC path configuration, connectivity testing, ownership assignment |
| assignments | Deployment wizard, assignment creation, rollout monitoring |
| rollout | Rollout status tracking per assignment |
| policy | Policy Manager integration, ABAC policies, people management, enrollment queue |
| audit | Audit log viewer with filtering and export |
| settings | Application settings — server endpoints, cert/key paths, UI theme |

Next Steps