# Workflow

SemanticModels provides functionality for model augmentation. With SemanticModels, you treat a class of models as a symmetric monoidal category and metaprogram on models using morphisms in that category, via the CategoryTheory module.

1. Introduce a new class of models to analyze by writing a struct to represent models from class $\mathcal{C}$, along with a constructor `model(::C, ...)` to build the struct
2. Define a morphism from FinSet to $\mathcal{C}$ as a function `(f::FinSetMorph)(g::G) where G <: C`
3. Define the disjoint union of two models of class $\mathcal{C}$ as `⊔(g::C, h::C)`
4. Define models of class $\mathcal{C}$ as decorations on a finite set, and use pushouts and double pushouts to transform the models
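The four steps above can be sketched for a hypothetical toy class of labeled-graph models. Apart from the names taken from the workflow (`model`, `FinSetMorph`, `⊔`), every type and function below is illustrative, not part of the SemanticModels API:

```julia
# 1. A struct representing models from a toy class C (labeled graphs),
#    with a `model` constructor.
struct GraphModel
    nvertices::Int
    edges::Vector{Tuple{Int,Int}}
end
model(::Type{GraphModel}, n, edges) = GraphModel(n, edges)

# 2. A morphism of finite sets acts on a model by relabeling its vertices.
struct FinSetMorph
    codomain::Int
    map::Vector{Int}        # map[i] is the image of vertex i
end
(f::FinSetMorph)(g::GraphModel) =
    GraphModel(f.codomain, [(f.map[s], f.map[t]) for (s, t) in g.edges])

# 3. Disjoint union: shift the second model's vertex labels past the first's.
function ⊔(g::GraphModel, h::GraphModel)
    shifted = [(s + g.nvertices, t + g.nvertices) for (s, t) in h.edges]
    GraphModel(g.nvertices + h.nvertices, vcat(g.edges, shifted))
end

# 4. Treating a model as a decoration on its vertex set, a pushout glues two
#    models along identified vertices: take the disjoint union, then quotient.
function glue(g::GraphModel, h::GraphModel, pairs::Vector{Tuple{Int,Int}})
    u = g ⊔ h
    q = collect(1:u.nvertices)          # quotient map on vertices
    for (i, j) in pairs                 # identify vertex i of g with j of h
        q[j + g.nvertices] = i
    end
    dense = Dict(l => k for (k, l) in enumerate(sort(unique(q))))
    FinSetMorph(length(dense), [dense[x] for x in q])(u)
end
```

For example, `glue(model(GraphModel, 2, [(1, 2)]), model(GraphModel, 2, [(1, 2)]), [(2, 1)])` joins two edges end to end into a three-vertex path.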

To extend this new class of models to support composition with open systems:

1. Introduce a new open model type that extends `OpenModel{V, C}`, along with a constructor
2. Define `otimes` and `compose` for the new open model type
3. Convert models of class $\mathcal{C}$ to `OpenModel{V, C}` with a defined domain and codomain, and perform model composition
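The open-system steps can be sketched in the same hypothetical labeled-graph setting. `OpenModel{V, C}`, `otimes`, and `compose` are the names used in the steps above; the concrete types and method bodies are illustrative only:

```julia
# A toy "class C" of models: labeled graphs with a disjoint union.
struct GraphModel
    nvertices::Int
    edges::Vector{Tuple{Int,Int}}
end
⊔(g::GraphModel, h::GraphModel) = GraphModel(
    g.nvertices + h.nvertices,
    vcat(g.edges, [(s + g.nvertices, t + g.nvertices) for (s, t) in h.edges]))

# An open model wraps a model with boundary vertices (a domain and a
# codomain) through which it composes with other open systems.
struct OpenGraphModel
    dom::Vector{Int}
    model::GraphModel
    codom::Vector{Int}
end

# otimes: side-by-side (monoidal) product via disjoint union, concatenating
# the interfaces.
function otimes(f::OpenGraphModel, g::OpenGraphModel)
    n = f.model.nvertices
    OpenGraphModel(vcat(f.dom, g.dom .+ n), f.model ⊔ g.model,
                   vcat(f.codom, g.codom .+ n))
end

# compose: glue g onto f by identifying g's domain with f's codomain;
# all other vertices of g get fresh labels past f's vertex set.
function compose(f::OpenGraphModel, g::OpenGraphModel)
    ident = Dict(zip(g.dom, f.codom))
    fresh = Dict{Int,Int}()
    next = f.model.nvertices
    relabel(v) = get(ident, v) do
        get!(fresh, v) do
            next += 1
        end
    end
    foreach(relabel, 1:g.model.nvertices)    # label every vertex of g
    edges = vcat(f.model.edges,
                 [(relabel(s), relabel(t)) for (s, t) in g.model.edges])
    OpenGraphModel(f.dom, GraphModel(next, edges), relabel.(g.codom))
end
```

Composing two single-edge open graphs `1 → 2` along their boundaries yields the path `1 → 2 → 3`, as expected of composition in a category of open systems.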

Under this workflow, SemanticModels is more of a framework than a library, but it is extensible: you can take real-world modeling methods and build a modeling framework around them, rather than building a modeling framework first and then porting the models into it.

See the examples folder for demonstrations of how to build model types and use transformations for common metamodeling tasks. A complete API can be found in the Library Reference.

## Examples

The following examples are found in the folder SemanticModels/examples as Julia files, which can be viewed as notebooks with Jupytext or as rendered HTML pages in the docs.

### Model Augmentation

These examples illustrate model augmentation with SemanticModels.

### Algebraic Model Transformation

These examples illustrate how model transformations can be treated as algebraic structures, and how to exploit that structure to develop new models.

### Model Synthesis

The workflow example combines agentgraft.jl and polynomial_regression.jl to build a modeling pipeline. This is the most important example for understanding the power of SemanticModels for model augmentation and synthesis.

workflow.jl

## Pre hoc vs post hoc frameworks

A conventional modeling framework is a software package that defines a set of modeling constructs for representing problems and a set of algorithms for solving those problems.

A typical modeling framework is developed when:

1. A library author (LA) decides to write a library for solving models of a specific class $\mathcal{C}$
2. LA develops a DSL for representing models in $\mathcal{C}$
3. LA develops solvers for models in $\mathcal{C}$
4. Scientist (S) uses LA's macros to write new models and pass them to the solvers
5. S publishes many great papers with the awesome framework

ModelingToolkit.jl is a framework for building DSLs for expressing mathematical models of scientific phenomena, so you could think of it as a meta-DSL: a language for describing languages that describe models. Its workflow is:

1. A library author (LA) decides to write a library for solving models of a specific class $\mathcal{C}$
2. LA develops a DSL for representing models in $\mathcal{C}$ using ModelingToolkit (MT).
3. LA develops solvers for models in $\mathcal{C}$ using the intermediate representations provided by MT.
4. Scientist (S) uses LA's macros to write new models and pass them to the solvers
5. S publishes many great papers with the awesome framework
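To make the "meta-DSL" point concrete, step 2 of this workflow looks roughly like the sketch below: a library author uses ModelingToolkit's symbolic primitives to define a tiny domain-specific constructor. The `growth_model` function is hypothetical, and exact ModelingToolkit signatures vary across versions:

```julia
# Sketch (assumes ModelingToolkit.jl is installed): a library author wraps
# ModelingToolkit's symbolic interface in a small domain DSL for
# exponential-growth models. `growth_model` is an illustrative name.
using ModelingToolkit

function growth_model(; name)
    @parameters r            # growth rate
    @variables t x(t)        # independent variable and state
    D = Differential(t)
    ODESystem([D(x) ~ r * x], t; name = name)
end

# A scientist then writes models in the LA's mini-language, e.g.
# sys = growth_model(name = :bacteria), and passes `sys` to the solvers.
```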

This is a great idea and I hope it succeeds because it will revolutionize how people develop scientific software and really benefit many communities.

One of the assumptions behind SemanticModels is that we can't make scientists use a modeling language. This is reasonable because the really interesting models push the boundaries of the solvers and the libraries; if you have to change the modeling language every time you add a novel model, what is the modeling language getting you?

Another key idea inspiring SemanticModels is that every software library introduces a miniature DSL for using that library. You have to set up the problem in some way, pass the parameters and options to the solver, and then interpret the solution. These mini-DSLs form through idiomatic usage instead of through an explicit representation like the one ModelingToolkit provides.
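For instance, the standard DifferentialEquations.jl idiom (define a right-hand side, build a problem, call `solve`, query the solution) is one such implicit mini-DSL. The snippet below sketches that idiom, assuming DifferentialEquations.jl is installed; the logistic model and its parameters are illustrative:

```julia
# The idiomatic DifferentialEquations.jl pattern: every use of the library
# follows the same set-up / solve / interpret shape, which is the implicit
# mini-DSL that forms through usage rather than explicit design.
using DifferentialEquations

f(u, p, t) = p.r * u * (1 - u / p.K)              # set up: define the model
prob = ODEProblem(f, 0.01, (0.0, 50.0), (r = 0.3, K = 1.0))
sol = solve(prob, Tsit5())                        # pass options to the solver
sol(25.0)                                         # interpret the solution
```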

SemanticModels can address this as the inverse problem to ModelingToolkit: given a corpus of usage for a given library, what is the implicit DSL that users have developed?

Our workflow is:

1. Identify a widely used library
2. Extend SemanticModels by implementing the few necessary category theory functions in terms of the library
3. Build a DSL for that class of problems
4. New researchers and AI scientists use the new DSL to represent novel models
5. Generate new models in the DSL using transformations and augmentations that are valid in the DSL

In this line of inquiry, the DSL plays the role of the "structured semantic representation" of the model. We could use ModelingToolkit DSLs as the backend.