Building the DataBridge in C#

DataBridge: A Universal Synchronization Layer Built Around Dynamic JSON and ExpandoObject Payloads

DataBridge was developed as a reusable synchronization platform for moving business data from a local administrative environment into remote systems such as HTTP services, Microsoft SQL Server, and MySQL. The project was not designed as a one-off export utility. It was built as a durable bridge between very different technical environments, with the flexibility to keep working even when the exact data shape is not known in advance.

One of the most important technical decisions inside this project was the use of a dynamic payload model based on dictionaries, JSON serialization, and ExpandoObject instances created through the internal ObjectFactory. That design made the bridge far more universal than a traditional integration that depends on rigid DTO classes for every table and every record type.

Project Context

In many business environments, the source system contains valuable operational data but is not a suitable direct backend for web applications, portals, dashboards, or integrations. Legacy systems often have connectivity limitations, table structures that are not designed for modern applications, and deployment constraints that make direct exposure risky.

DataBridge solves that by acting as a translation and synchronization layer. It reads source data, understands table structures, prepares remote tables, tracks which records must be synchronized, and sends the data onward to the configured target. Depending on the installation, that target can be an HTTP-based bridge endpoint, Microsoft SQL Server, or MySQL.

What makes the implementation especially strong is that the bridge does not require a hardcoded class model for every possible record shape. Instead, it can package records dynamically and move them across the bridge in a structure that remains flexible until the receiving side decides how to persist it.

The Core Challenge: Moving Data Without Hardcoding Every Schema

Traditional integrations often become brittle because they expect a fixed object model. Every table, every field list, and every payload type must be modeled in code ahead of time. That approach works for small systems, but it becomes expensive when the integration must support many tables, evolving structures, different administrations, and multiple remote backends.

DataBridge takes a more universal approach. During synchronization, source rows are first collected into a Dictionary<string, object>. Each row becomes a dynamic field map rather than a rigid class instance. Strings are cleaned, dates are normalized into transport-safe string values, booleans are preserved, numeric values are converted consistently, and administrative context such as ADMINCODE is added before the record is sent.

That intermediate dictionary structure is the key. It allows the bridge to carry data whose final shape is determined by metadata and runtime configuration instead of compile-time assumptions. In other words, DataBridge can transport records without ever needing to "know" them as strongly typed C# models.
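The normalization step described above can be sketched roughly as follows. The helper name NormalizeRow and the parameter adminCode are illustrative assumptions for this article, not the actual DataBridge source:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

static class RowNormalizer
{
    // Hypothetical sketch: convert one source row into a transport-safe
    // field map, along the lines the article describes.
    public static Dictionary<string, object> NormalizeRow(
        IDictionary<string, object> sourceRow, string adminCode)
    {
        var row = new Dictionary<string, object>();
        foreach (var pair in sourceRow)
        {
            object normalized = pair.Value;                      // booleans etc. pass through
            if (pair.Value is string s)
                normalized = s.Trim();                           // clean strings
            else if (pair.Value is DateTime d)
                normalized = d.ToString("yyyy-MM-dd HH:mm:ss",   // transport-safe date text
                                        CultureInfo.InvariantCulture);
            else if (pair.Value is decimal m)
                normalized = Convert.ToDouble(m);                // consistent numeric conversion
            row[pair.Key] = normalized;
        }
        row["ADMINCODE"] = adminCode;                            // add administrative context
        return row;
    }
}
```

The exact cleaning and conversion rules in DataBridge may differ; the point is that every row leaves this stage as a plain field map rather than a typed model.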

The Role of ObjectFactory

The internal ObjectFactory is where this flexibility becomes practical. Its CreateInstance method takes a Dictionary<string, object> and converts it into a dynamic ExpandoObject. Every key-value pair from the dictionary is copied into the expandable object at runtime.

This is a deceptively simple piece of infrastructure, but it has a major architectural effect. Once the payload becomes an ExpandoObject, DataBridge can place it into request objects without needing a separate compiled class for each table or operation. The request model keeps fields such as data and parameters as generic object properties, and the bridge serializes the full request to JSON using Newtonsoft.Json.
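A minimal sketch of what such a factory can look like, assuming only the behavior described above (the real CreateInstance may differ in details):

```csharp
using System.Collections.Generic;
using System.Dynamic;

static class ObjectFactory
{
    // Copy every key-value pair from the dictionary into a dynamic
    // ExpandoObject. ExpandoObject itself implements
    // IDictionary<string, object>, which is what makes this possible.
    public static ExpandoObject CreateInstance(Dictionary<string, object> fields)
    {
        var expando = new ExpandoObject();
        var target = (IDictionary<string, object>)expando;
        foreach (var pair in fields)
            target[pair.Key] = pair.Value;
        return expando;
    }
}
```

Once created, the result can be used through a `dynamic` reference (so `record.NAME` resolves at runtime), and Newtonsoft.Json serializes it as a plain JSON object with one property per dictionary key.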

That means DataBridge can package:

  • Dynamic record collections for synchronization batches.
  • Runtime-generated parameter payloads for operations such as remote table creation.
  • Schema-related metadata that may differ per table or per administration.
  • Structures that are not practical to lock into a static class hierarchy.

In practical terms, ObjectFactory turns a simple dictionary into a transport-ready object graph that behaves like native JSON. That makes the bridge adaptable without making the codebase chaotic.

Why ExpandoObject Matters Here

The use of ExpandoObject is not just a convenience feature. It is central to how DataBridge stays universal.

An ExpandoObject allows properties to exist dynamically at runtime. This fits perfectly with synchronization work, where the bridge may be moving records from tables with very different field sets. Instead of maintaining dozens or hundreds of dedicated payload models, the bridge can construct the data shape from the actual row content and send it onward immediately.

This gives DataBridge several important advantages:

  • It can support a broad range of tables without repetitive model maintenance.
  • It can carry fields discovered from database structure rather than fields predefined by source code.
  • It remains easier to extend when new tables or variants are introduced.
  • It can use the same request pipeline for different operations and different transport targets.

This is exactly why the bridge is truly universal in its payload handling. The transport format is flexible enough to accommodate unknown or evolving JSON structures while still remaining structured enough for controlled processing on the receiving side.
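To make that concrete, here is a small illustrative example (table and field names are invented for this article) of two records with completely different field sets traveling through one generic code path:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

dynamic customer = new ExpandoObject();
customer.CUSTOMERCODE = "C001";
customer.NAME = "Acme BV";

dynamic article = new ExpandoObject();
article.ARTICLECODE = "A-100";
article.PRICE = 12.50m;
article.STOCK = 3;

// The same generic handling works for both shapes, because ExpandoObject
// exposes whatever properties exist at runtime as a dictionary.
foreach (var payload in new object[] { customer, article })
{
    var fields = (IDictionary<string, object>)payload;
    Console.WriteLine(string.Join(", ", fields.Keys));
}
```

No `Customer` or `Article` class exists anywhere in this snippet, yet both records are fully inspectable and serializable.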

How the Dynamic Payload Moves Through the Bridge

The inner flow of DataBridge can be understood as a staged transformation pipeline.

  1. Source rows are read from the local administrative database.
  2. Each row is converted into a dictionary of field names and values.
  3. The dictionary is normalized so that strings, dates, booleans, and numeric values are safe and consistent for transport.
  4. ObjectFactory.CreateInstance converts that dictionary into an ExpandoObject.
  5. The dynamic object is added to a request payload that also contains table name, administration, primary key, and structure metadata.
  6. The request is serialized to JSON and sent to the configured backend.
  7. The receiving backend interprets that dynamic payload and turns it into SQL insert, update, or delete operations.

Because the request model stores data and parameters as generic objects, the same transport envelope can be reused for many bridge actions. That includes record synchronization, table creation, and table removal. The implementation avoids over-specialization and keeps the bridge logic consistent.
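Under those assumptions, the transport envelope might look something like the sketch below. The property names are guesses for illustration; the source only tells us that data and parameters are generic object fields. DataBridge itself serializes with Newtonsoft.Json, but this sketch uses the built-in System.Text.Json so it runs without extra packages:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Text.Json;

// Hypothetical envelope: field names invented for this sketch.
class BridgeRequest
{
    public string Action { get; set; }       // e.g. "sync" or "createTable"
    public string Table { get; set; }
    public string AdminCode { get; set; }
    public string PrimaryKey { get; set; }
    public object Data { get; set; }         // dynamic record batch
    public object Parameters { get; set; }   // runtime-generated parameters
}

class Demo
{
    static void Main()
    {
        dynamic record = new ExpandoObject();
        record.ID = 42;
        record.NAME = "Acme";

        var request = new BridgeRequest
        {
            Action = "sync",
            Table = "CUSTOMERS",
            AdminCode = "001",
            PrimaryKey = "ID",
            Data = new List<object> { record }
        };

        // The ExpandoObject serializes as a plain JSON object, so the same
        // envelope class carries any table without per-table models.
        Console.WriteLine(JsonSerializer.Serialize(request));
    }
}
```

Because `Data` and `Parameters` are typed as `object`, the identical envelope serves record batches, table-creation parameters, and removal commands alike.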

Unknown JSON In, Usable Data Out

A particularly strong aspect of this design is that DataBridge is comfortable handling JSON-like structures even when the exact property layout is only known at runtime. The bridge builds those structures from dictionaries, serializes them without demanding a strict class contract, and then reconstructs them at the other end into workable key-value collections again.

On the receiving side, both the Microsoft SQL Server and MySQL paths convert the dynamic record payload back into a dictionary form so the SQL generation logic can work with it generically. That means the middle of the bridge can stay flexible, while the persistence layer still has precise control over field names, value formatting, primary key checks, and insert-versus-update decisions.
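As a rough illustration of that receiving-side pattern, the sketch below flattens a record back into a dictionary view and decides between insert and update. The helper name, the existence-check flag, and the statement shapes are invented for this article; the real SQL generation in DataBridge is certainly more involved:

```csharp
using System.Collections.Generic;
using System.Linq;

static class SqlWriter
{
    // Choose INSERT or UPDATE based on whether the primary key value
    // already exists in the target table (the lookup itself is stubbed
    // out here as a bool). Values travel as @-parameters; column names
    // come from trusted structure metadata, never from user input.
    public static string BuildStatement(
        string table, string primaryKey,
        IDictionary<string, object> record, bool keyExists)
    {
        if (keyExists)
        {
            var sets = record.Keys
                .Where(k => k != primaryKey)
                .Select(k => $"{k} = @{k}");
            return $"UPDATE {table} SET {string.Join(", ", sets)} " +
                   $"WHERE {primaryKey} = @{primaryKey}";
        }
        var cols = string.Join(", ", record.Keys);
        var vals = string.Join(", ", record.Keys.Select(k => "@" + k));
        return $"INSERT INTO {table} ({cols}) VALUES ({vals})";
    }
}
```

The dynamic payload stays generic right up to this point; only here does the bridge commit to concrete column names, parameter values, and an insert-versus-update decision.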

This is the important architectural balance: DataBridge is dynamic in transport, but deliberate in execution.

Universal by Design, Not by Marketing

Calling a platform “universal” only means something if the internals support that claim. In DataBridge, that universality is visible in the implementation:

  • The same request structure can target HTTP, Microsoft SQL Server, or MySQL.
  • The same payload strategy can represent records from different tables without dedicated model classes.
  • The same dynamic object construction is reused for both data batches and operational parameters.
  • The same synchronization engine can work across different administrations and deployment models.

This is what makes the ObjectFactory and ExpandoObject strategy so important. It removes unnecessary coupling between the source schema and the transport schema. Instead of forcing the whole bridge to change whenever the data shape changes, the bridge can continue operating on a dynamic but controlled representation of the payload.

Operational Benefits of the Dynamic Approach

The flexible JSON handling is not just an engineering preference. It creates real operational advantages.

  • New or changed fields can be accommodated with much less refactoring.
  • The bridge can scale to more tables and more client-specific variations without exploding the class model.
  • Backend selection remains a configuration concern rather than a payload redesign problem.
  • The transport layer stays reusable across synchronization features.
  • The project remains easier to maintain because the bridge logic is centered on structure discovery and transformation rather than endless object definitions.

For a system whose job is to connect older business software to modern platforms, this matters a lot. Integration software has to absorb variation. DataBridge was designed to do exactly that.

More Than a Data Transfer Tool

Beyond the dynamic JSON strategy, DataBridge also includes the broader mechanics required for dependable synchronization. It discovers table definitions, identifies primary keys, prepares remote structures, tracks synchronization work, and sends records in batches. It can support single-database deployments or separate remote databases per administration. In other words, the project combines runtime flexibility with practical synchronization discipline.

The result is not just a connector, but a middleware platform that helps legacy administrative data become usable for websites, portals, reporting environments, and custom business applications.

Conclusion

The most distinctive part of DataBridge is the way it handles data that does not have to be fully hardcoded in advance. By building records as dictionaries, converting them through ObjectFactory into ExpandoObject instances, serializing them as JSON, and then reconstructing them generically on the receiving side, the bridge gains a level of adaptability that many traditional integrations lack.

That flexible handling of unknown JSON is what makes the bridge truly universal. It allows DataBridge to sit between very different systems, keep the transport layer lightweight and dynamic, and still produce reliable, controlled results in the target environment. For an integration platform intended to survive schema variation, multiple backends, and evolving business requirements, that is one of its strongest architectural decisions.
