Building DataBridge: synchronising 20 independent 4D systems with a central MySQL backend

We are currently developing our DataBridge layer: a synchronisation engine that connects around 20 independent 4D installations to a central MySQL backend powering the website and related online services. The challenge is not simply moving data from A to B. The real challenge is doing it safely, repeatedly, and predictably in a live production environment where users are working, records are changing, connections can disappear, and no machine is allowed to become the source of chaos.

Some software projects are glamorous. This one is not. It lives in the background, works while everybody else sleeps, and only gets noticed when it fails. That is precisely why it matters.

At the centre of this work is a class that extends DBStructure. It acts as the orchestral conductor for synchronisation. It knows whether sync is active, where the remote endpoint lives, whether triggers are enabled, whether a sync cycle is already running, what the last HTTP error was, and which records have changed. It also maintains a collection of event listeners so open windows inside the 4D application can react when remote changes arrive. In other words, it is not only a transport layer. It is also the coordination layer between the database, the network and the user interface.

The architecture follows a straightforward but disciplined model. Local changes are first captured through triggers. When a record is created, modified or deleted, the trigger does not immediately try to synchronise with the remote server. Instead, it writes an entry into a local RecordSync queue. That decision is important. It decouples database editing from network communication, which means a user saving a record does not have to wait for a remote server, an HTTP call or a temporary connectivity problem. The local system remains responsive, and synchronisation becomes an asynchronous background responsibility rather than a blocking foreground concern.
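The capture step described above can be sketched in a few lines. This is a hedged illustration, not the actual 4D trigger code: the `RecordSync` column names, the `on_record_saved` helper, and the use of SQLite as a stand-in for the local store are all assumptions made for the example.

```python
import json
import sqlite3  # stand-in for the local store; the real system uses 4D

def on_record_saved(conn, table, primary_key, state, entity=None):
    """Queue a change ("created", "modified" or "deleted") for later sync.

    The trigger never talks to the network: it only appends to the local
    RecordSync queue, so the user's save returns immediately.
    """
    conn.execute(
        "INSERT INTO RecordSync (tableName, pk, state, payload) VALUES (?, ?, ?, ?)",
        (table, str(primary_key), state,
         json.dumps(entity) if entity is not None else None),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE RecordSync ("
    "  sequence INTEGER PRIMARY KEY AUTOINCREMENT,"
    "  tableName TEXT, pk TEXT, state TEXT, payload TEXT)"
)
on_record_saved(conn, "Customers", 42, "modified", {"id": 42, "name": "Acme"})
on_record_saved(conn, "Customers", 7, "deleted")
rows = conn.execute(
    "SELECT tableName, pk, state FROM RecordSync ORDER BY sequence"
).fetchall()
```

The key property is that the queue insert is the only thing on the user's critical path; everything network-related happens later.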

Once the background sync loop wakes up, it starts by testing whether the remote endpoint is reachable. If the connection is healthy, the bridge performs two distinct phases. First it pushes local queued changes outward. Then it fetches remote changes back in. These are intentionally separate flows. One handles outbound intent from the local 4D machine; the other handles inbound state from the central backend. Keeping those responsibilities distinct makes the code easier to reason about and makes failure behaviour far easier to control.
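The shape of one sync cycle can be outlined as below. The method names (`ping_remote`, `push_local_changes`, `pull_remote_changes`) and the `sync_running` guard are illustrative assumptions, not the actual class API; the point is the ordering and the re-entrancy check.

```python
def run_sync_cycle(bridge):
    """One background cycle: probe, then push, then pull."""
    if bridge.sync_running:           # guard against overlapping cycles
        return "skipped"
    bridge.sync_running = True
    try:
        if not bridge.ping_remote():  # cheap reachability test first
            return "offline"          # nothing touched; retry next cycle
        bridge.push_local_changes()   # phase 1: outbound intent
        bridge.pull_remote_changes()  # phase 2: inbound state
        return "ok"
    finally:
        bridge.sync_running = False

class FakeBridge:
    """Minimal stand-in so the control flow can be exercised."""
    def __init__(self, online=True):
        self.sync_running = False
        self.online = online
        self.log = []
    def ping_remote(self):
        return self.online
    def push_local_changes(self):
        self.log.append("push")
    def pull_remote_changes(self):
        self.log.append("pull")

bridge = FakeBridge()
result = run_sync_cycle(bridge)
```

Keeping push strictly before pull means a machine's own intent is on the wire before it starts absorbing remote state.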

The local-to-remote phase reads queued rows from RecordSync, ordered by sequence, builds a payload, and sends it in batches. Each payload contains the table name, record state, primary key information, and when relevant the serialised entity data itself. Deletions only need identity and state. Creations and modifications require a full object snapshot. Once a batch is acknowledged successfully, the corresponding queue entries are removed. If the send fails, they are deliberately left in place so the next cycle can retry them. This is not flashy engineering, but it is the kind that keeps systems honest.
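The payload rule in that paragraph (deletions carry only identity and state; creations and modifications carry a full snapshot) can be sketched as follows. Field names are assumptions for the example, not the real wire format.

```python
import json

def build_payload(queue_rows):
    """Serialise queued changes; rows are assumed ordered by sequence."""
    items = []
    for row in queue_rows:
        item = {"table": row["table"], "pk": row["pk"], "state": row["state"]}
        if row["state"] in ("created", "modified"):
            item["data"] = row["entity"]  # full object snapshot required
        items.append(item)                # deletions: identity + state only
    return json.dumps(items)

payload = build_payload([
    {"table": "Orders", "pk": 1, "state": "created",
     "entity": {"id": 1, "total": 99}},
    {"table": "Orders", "pk": 2, "state": "deleted", "entity": None},
])
decoded = json.loads(payload)
```

Because queue entries are only removed after a batch is acknowledged, a failed send simply leaves the same rows for the next cycle.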

The inbound side is equally careful. The bridge asks the server how many changes are waiting after the last known sequence number, fetches them in blocks, and replays them one by one into the local 4D datastore. Each incoming change is classified as created, modified or deleted. New records are created if they do not yet exist. Modified records are reloaded into the matching entity. Deleted records are dropped when possible. After processing, the local stored sequence number is advanced so the machine knows exactly where it is in the stream.
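The inbound replay loop can be illustrated like this. The server interface and record store are faked with plain Python objects; only the control flow (fetch in blocks after a sequence number, classify, apply, advance) mirrors the description.

```python
def apply_remote_changes(store, fetch_block, last_seq):
    """Replay remote changes into a local store, advancing last_seq."""
    while True:
        changes = fetch_block(after=last_seq, limit=100)
        if not changes:
            break
        for ch in changes:
            if ch["state"] == "created" and ch["pk"] not in store:
                store[ch["pk"]] = ch["data"]   # create if not yet present
            elif ch["state"] == "modified":
                store[ch["pk"]] = ch["data"]   # reload into matching entity
            elif ch["state"] == "deleted":
                store.pop(ch["pk"], None)      # drop when possible
            last_seq = ch["seq"]               # advance stored sequence
    return last_seq

remote = [
    {"seq": 11, "pk": "A", "state": "created", "data": {"v": 1}},
    {"seq": 12, "pk": "A", "state": "modified", "data": {"v": 2}},
    {"seq": 13, "pk": "B", "state": "deleted", "data": None},
]

def fetch_block(after, limit):
    return [c for c in remote if c["seq"] > after][:limit]

store = {"B": {"v": 9}}
new_seq = apply_remote_changes(store, fetch_block, last_seq=10)
```

Persisting the sequence number after each applied change is what lets a machine resume exactly where it stopped after a crash or disconnect.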

This sounds simple until you meet the oldest enemy of every sync engine: recursion. If a remote modification is written locally and that write triggers a new outbound change, the system can spiral into an endless echo chamber. The bridge therefore uses a triggerKey to mark records currently being applied from remote sync. When the matching local trigger fires, it recognises that the change originated from the bridge itself and ignores it. That single idea prevents the engine from talking to itself forever.
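The echo-suppression idea is small enough to show in full. In this sketch the triggerKey is modelled as a set of primary keys currently being written by the bridge; the real mechanism in 4D differs, but the logic is the same: the trigger checks the mark and ignores its own reflections.

```python
applying_from_remote = set()  # pks currently being written by the bridge

def apply_remote_write(queue, pk, data):
    """Write a remote change locally without re-queuing it for push."""
    applying_from_remote.add(pk)        # set the triggerKey before writing
    try:
        local_trigger(queue, pk, data)  # the write fires the normal trigger
    finally:
        applying_from_remote.discard(pk)

def local_trigger(queue, pk, data):
    if pk in applying_from_remote:
        return                          # change came from the bridge: ignore
    queue.append((pk, data))            # genuine local edit: enqueue for push

queue = []
apply_remote_write(queue, "A", {"v": 1})  # bridge-applied: not queued
local_trigger(queue, "B", {"v": 2})       # user edit: queued
```

Without the mark, every inbound change would generate an outbound one, and two machines would bounce the same record between each other indefinitely.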

Another quiet but essential detail is locking. In a real application, users may already have a record open when a remote update arrives. Instead of forcing a save, overwriting data or crashing into a lock conflict, the bridge tries to lock the entity first. If it cannot, it logs the situation and leaves the item for a later cycle. This is a conscious design choice: defer safely rather than pretend that concurrency problems do not exist. Distributed systems punish optimism.
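The defer-on-conflict policy looks roughly like this. A `threading.Lock` stands in for 4D's record locking, which works differently; the names `apply_or_defer` and the log/deferred lists are invented for the example.

```python
import threading

def apply_or_defer(lock, apply, log, deferred, change):
    """Try to lock and apply; on conflict, log and retry in a later cycle."""
    if lock.acquire(blocking=False):    # non-blocking attempt only
        try:
            apply(change)
        finally:
            lock.release()
        return True
    log.append(f"deferred {change}")    # record open elsewhere: do not force
    deferred.append(change)
    return False

lock = threading.Lock()
applied, log, deferred = [], [], []
apply_or_defer(lock, applied.append, log, deferred, "rec-1")  # lock is free
lock.acquire()                                                # simulate a user holding the record
apply_or_defer(lock, applied.append, log, deferred, "rec-2")  # conflict: deferred
```

The deferred item stays available for the next cycle, which is exactly the "retry later instead of forcing" behaviour the paragraph describes.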

The implementation also puts energy into observability. The sync layer writes progress updates to a palette window, tracks processed record counts, stores HTTP errors, and logs each significant event to a synchronisation log file. Developers working on background infrastructure know this truth well: when something goes wrong at 03:12 in the morning, the difference between a five-minute fix and a four-hour investigation is usually the quality of the logging.

Beyond normal incremental sync, the bridge also includes operational tooling. It can push complete tables to the remote backend, fetch remote tables back into 4D, create remote tables, and compare local and remote structures. That makes it useful not only for day-to-day synchronisation, but also for setup, migration, diagnostics and controlled recovery scenarios. A synchronisation engine becomes far more valuable when it can help explain the system, not only move it.
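As a rough sketch of the structure-comparison idea, assume both sides can describe a table as a column-name-to-type mapping (the real 4D and MySQL metadata are richer than this):

```python
def diff_structure(local, remote):
    """Compare two table structures given as {column: type} mappings."""
    return {
        "missing_remote": sorted(set(local) - set(remote)),
        "missing_local": sorted(set(remote) - set(local)),
        "type_mismatch": sorted(c for c in set(local) & set(remote)
                                if local[c] != remote[c]),
    }

diff = diff_structure(
    {"id": "INT", "name": "TEXT", "price": "REAL"},
    {"id": "INT", "name": "VARCHAR", "created": "DATETIME"},
)
```

A report like this is what turns a sync engine into a diagnostic tool: it can say why two systems disagree, not just that they do.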

For developers, this project sits in an interesting space between classic business software and distributed systems engineering. On one side there is 4D: mature, local, transactional, trusted. On the other side there is a central MySQL-backed web platform that expects consistency across many machines and many users. DataBridge exists in the narrow passage between those worlds. It translates not only data, but timing, state, failure, retries, sequence and trust.

There is still more work ahead. Synchronisation engines are never truly finished; they are refined, hardened and taught new edge cases over time. But each improvement moves the system closer to what developers actually want from infrastructure: something reliable enough to forget about. In the end, that is the highest compliment such a component can receive. Not applause, but silence.
