A learning log of replicating the Growth of a Field model in the #web3 context.

In 1956 Jay W. Forrester began applying the principles of feedback and control to the study of economic and management problems. Forrester felt that work in the field was fragmented and focused on problems that would not provide the leverage required to achieve truly superior performance. Thus he pioneered the field of System Dynamics.

Since 1956 the field has grown, but more slowly than expected by most who have followed the work. In looking at the growth of the field we can generate a number of candidate hypotheses:

- Growth is exponential but slow (things are good).
- There is a shortage of qualified professors (increase supply).
- There are not enough textbooks (standardize training).
- Only the simplest models can ever be widely disseminated (change goals).
- We need technology to help people understand models (improve accessibility).
- People need to be reeducated to think systemically (increase demand).
- Most people cannot build models (potential market is small).
- We need more skilled practitioners (improve practice quality).
- It is just too hard (make it easier).
- There is a need/skill mismatch (reorient training).
- We need to publicize successful applications (increase marketing).

Regardless of which hypothesis we consider, we need to think about how many people are using the technology. The process starts with the work of several people, then spreads. Holding this very simple thought we can start with:

There are two pools of people: *Practitioners* and *Non Practitioners*.

These people are all going to conferences, attending meetings and bumping into one another on the street (the pandemic has shifted much of this contact into digital spaces: Zoom meetings, social media, Discord servers, subreddits, VR and the metaverse). Since the process of adoption requires that someone instill the idea of the technology in someone else, we are interested in the frequency with which someone who is not a practitioner encounters someone who is a practitioner. Therefore we start with *non practitioner contacts*, the rate at which non practitioners come into contact with anyone, and then multiply by the fraction of people who are practitioners.

*practitioner with non practitioner contacts* represents contacts between someone who is practicing and someone who is not practicing. There is a chance that, as the result of this meeting, the non practitioner will take up practice. The probability that this happens is called the *adoption fraction*.

The equations for the above model are:

- adoption fraction = 0.01. Units: Dmnl (dimensionless)
- adoptions = practitioner with non practitioner contacts * adoption fraction. Units: Person/Year
- contact rate = 100. Units: 1/Year
- initial practitioners = 10. Units: Person
- non practitioner contacts = Non Practitioners * contact rate. Units: Person/Year
- Non Practitioners = INTEG(- adoptions, 1e+007). Units: Person

Note that we start the model with 10 million people (Non Practitioners). This is intended to represent the number of academics and skilled professionals for whom this type of work is relevant. This number is an open issue for discussion.

- practitioner prevalence = Practitioners/total population. Units: Dmnl
- practitioner with non practitioner contacts = non practitioner contacts * practitioner prevalence. Units: Person/Year
- Practitioners = INTEG(adoptions, initial practitioners). Units: Person
- total population = Non Practitioners + Practitioners. Units: Person
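As a sanity check, the equations above can be reproduced with a simple Euler integration. The Python framing and function name are mine (the original is a Vensim model); the constants and flow structure come directly from the listing, with the step size matching TIME STEP = 0.125:

```python
# Euler-integration sketch of the first adoption model.
def simulate(adoption_fraction=0.01, contact_rate=100.0,
             start=2014.0, end=2060.0, dt=0.125):
    non_practitioners = 1e7   # Non Practitioners = INTEG(-adoptions, 1e+007)
    practitioners = 10.0      # Practitioners = INTEG(adoptions, initial practitioners)
    history = []
    steps = int(round((end - start) / dt))
    for i in range(steps + 1):
        history.append((start + i * dt, practitioners))
        total = non_practitioners + practitioners
        prevalence = practitioners / total                 # Dmnl
        contacts = non_practitioners * contact_rate        # non practitioner contacts
        mixed = contacts * prevalence                      # practitioner with non practitioner contacts
        adoptions = mixed * adoption_fraction              # Person/Year
        non_practitioners -= adoptions * dt
        practitioners += adoptions * dt
    return history

history = simulate()
print(history[-1])  # by 2060 practitioners have saturated near the 10 million total
```

With the default constants the product of contact rate and adoption fraction gives an effective growth rate of about 1/Year, which is why the S-curve saturates decades before the end of the run.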

The model is run from the year 2014 to the year 2060 with TIME STEP at .125.

*[Web3: the term was coined in 2014 by Ethereum co-founder Gavin Wood, and the idea gained interest in 2021 from cryptocurrency enthusiasts, large technology companies, and venture capital firms. Source: Wikipedia]*

This model generates the following behavior:

For higher values like 0.01, the number of practitioners grows rapidly and saturates early as the total population adopts the method. For a value of 0.0025, on the other hand, the number of practitioners barely registers on this scale, so in the run below we extend the timeframe to 2090.

At an even lower adoption fraction of 0.001, growth does not register at this scale either, so we extend the timeframe further.

These runs show how easily the constants can be tuned to make the model fit a narrative. If we replace the constant adoption rate with a lookup built from the actual creation of unique Ethereum accounts, we can see a clearer picture:
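The sensitivity described above can be sketched in Python. This is my own helper (the `year_of_takeoff` name and the half-population criterion are illustrative choices, not part of the original model): the same first model is swept over the three adoption fractions, reporting the year in which practitioners first exceed half the population.

```python
# Sweep the adoption fraction and report the year practitioners
# first exceed 50% of the population (None if never in the horizon).
def year_of_takeoff(adoption_fraction, contact_rate=100.0,
                    start=2014.0, end=2200.0, dt=0.125):
    non_p, prac = 1e7, 10.0
    steps = int(round((end - start) / dt))
    for i in range(steps + 1):
        total = non_p + prac
        if prac / total > 0.5:
            return start + i * dt
        adoptions = (non_p * contact_rate) * (prac / total) * adoption_fraction
        non_p -= adoptions * dt
        prac += adoptions * dt
    return None

for af in (0.01, 0.0025, 0.001):
    print(af, year_of_takeoff(af))
```

At 0.01 takeoff comes in the late 2020s; at 0.0025 it slips to around 2070; at 0.001 it falls well past 2090, which is why those runs barely register on the original timeframe.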

https://etherscan.io/chart/address

One of the most important features of exponential growth is that there is, seemingly, very little activity for a long period of time, and then an explosion.
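A quick arithmetic illustration of that "nothing, then an explosion" character (the annual-doubling assumption is mine, chosen only to make the point concrete): a field doubling every year from 10 practitioners spends most of its history below 1% of its final size.

```python
# With annual doubling from 10 practitioners over 20 years,
# most of the run stays below 1% of the final level.
levels = [10]
for _ in range(20):
    levels.append(levels[-1] * 2)

final = levels[-1]                                   # 10 * 2**20
years_below_1pct = sum(1 for v in levels if v < 0.01 * final)
print(final, years_below_1pct)  # 10485760 14
```

Fourteen of the twenty-one recorded years sit below 1% of the final level; an observer sampling anywhere in that stretch would conclude nothing is happening.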

But how about making the model more dynamic by capturing a few more parameters and states.

We have seen how important the adoption rate is: if it is too low, a technology will take so long to diffuse that it is likely to be lost in the wash of other events and technologies. The way we have modeled it, however, adoption is just a matter of picking up the tool and going to work. This is not, unfortunately, the way life works. After deciding that a technology is good and worth pursuing, it is necessary to spend time and effort to become capable enough to use the technology.

Instead of just looking at *Non Practitioners* and *Practitioners*, we can look at

- *Non Practitioners*
- *Training Practitioners*
- *New Practitioners*
- *Experienced Practitioners*

*Practitioners* can then be reformulated as the sum of *New Practitioners* and *Experienced Practitioners*. *Experienced Practitioners* can also provide teaching and training to speed the transition from *Training Practitioners* to *New Practitioners* to *Experienced Practitioners*.

This diagram is a little bit busier, but is the same basic structure as the first model. There are six constants that determine the speed with which people can move through training and gaining experience.

*self training time* is the time required for a person with no formal training to become sufficiently proficient to be a practitioner.

*min training time* is the time required for a person with lots of formal training to become proficient.

As *Experienced Practitioners* devote time to training, the average training time moves from *self training time* toward *min training time* according to *training productivity*.

The formulation for people becoming experienced is exactly parallel.

The equations for this model are:

- adoption fraction = 0.01. Units: Dmnl
- adoptions = practitioner with non practitioner contacts * adoption fraction. Units: Person/Year
- application fraction = INITIAL(1 - supervision fraction - training fraction). Units: Dmnl
- contact rate = 100. Units: 1/Year
- Experienced Practitioners = INTEG(maturations, initial practitioners). Units: Person
- graduations = MIN(Training Practitioners/min training time, Training Practitioners/self training time + Experienced Practitioners * training fraction * training productivity). Units: Person/Year
  - Note: any addition of people devoted to training immediately adds to graduations, until people are coming out as fast as they can be expected to, at which point adding more trainers has no effect.
- initial practitioners = 10. Units: Person
- maturations = MIN(New Practitioners/min experience time, New Practitioners/self experience time + Experienced Practitioners * supervision fraction * supervision productivity). Units: Person/Year
- min experience time = 1. Units: Year
- min training time = 0.25. Units: Year
- New Practitioners = INTEG(graduations - maturations, 0). Units: Person
- non practitioner contacts = Non Practitioners * contact rate. Units: Person/Year
- Non Practitioners = INTEG(- adoptions, 1e+007). Units: Person
- practitioner prevalence = practitioners/total population. Units: Dmnl
- practitioner with non practitioner contacts = non practitioner contacts * practitioner prevalence. Units: Person/Year
- practitioners = New Practitioners + Experienced Practitioners. Units: Person
- self experience time = 4. Units: Year
- self training time = 2. Units: Year
- supervision fraction = 0. Units: Dmnl
- supervision productivity = 4. Units: 1/Year
  - Note: the supervision productivity is the number of people per year an experienced practitioner can train, so the units are (Person/Year)/Person, or 1/Year.
- total population = Non Practitioners + Training Practitioners + practitioners. Units: Person
- training fraction = 0. Units: Dmnl
- Training Practitioners = INTEG(adoptions - graduations, 0). Units: Person
- training productivity = 20. Units: 1/Year
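The MIN formulation for graduations is worth seeing numerically. This small sketch (my own framing, with the constants from the listing) shows that adding trainers raises the graduation rate linearly until trainees are coming out as fast as the minimum training time allows, after which more trainers have no effect:

```python
# graduations = MIN(cap, assisted): extra trainers help only up to the
# rate permitted by the minimum training time.
def graduations(training_pract, experienced, training_fraction,
                min_training_time=0.25, self_training_time=2.0,
                training_productivity=20.0):
    cap = training_pract / min_training_time
    assisted = (training_pract / self_training_time
                + experienced * training_fraction * training_productivity)
    return min(cap, assisted)

tp = 100.0  # Training Practitioners
for experienced in (0.0, 5.0, 10.0, 20.0, 40.0):
    print(experienced, graduations(tp, experienced, training_fraction=1.0))
```

With 100 trainees, the rate climbs from 50/Year (pure self-training) as trainers are added, then pins at the cap of 400/Year (100 people divided by the 0.25-year minimum training time) no matter how many trainers are available.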

The model is run from the year 2014 to the year 2060 with TIME STEP at .125. If we simulate this model at the three extremes we get the following behavior:

- *application fraction = 1* (all effort is devoted to work in the field, and new practitioners must train themselves)
- *supervision fraction = 1* (all effort is devoted to generating experienced practitioners)
- *training fraction = 1* (all effort is devoted to training novices)
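The three extremes can be reproduced with a self-contained Euler sketch of the staged model (again my own Python framing of the Vensim listing; note that application fraction does not enter these equations directly, so "all application" simply means both other fractions are zero):

```python
# Euler sketch of the staged model, run at the three extremes.
def simulate(training_fraction, supervision_fraction,
             start=2014.0, end=2060.0, dt=0.125):
    adoption_fraction, contact_rate = 0.01, 100.0
    min_training_time, self_training_time = 0.25, 2.0
    min_experience_time, self_experience_time = 1.0, 4.0
    training_productivity, supervision_productivity = 20.0, 4.0
    non_p, training, new, experienced = 1e7, 0.0, 0.0, 10.0
    for _ in range(int(round((end - start) / dt))):
        practitioners = new + experienced
        total = non_p + training + practitioners
        adoptions = (non_p * contact_rate) * (practitioners / total) * adoption_fraction
        graduations = min(training / min_training_time,
                          training / self_training_time
                          + experienced * training_fraction * training_productivity)
        maturations = min(new / min_experience_time,
                          new / self_experience_time
                          + experienced * supervision_fraction * supervision_productivity)
        non_p -= adoptions * dt
        training += (adoptions - graduations) * dt
        new += (graduations - maturations) * dt
        experienced += maturations * dt
    return new + experienced  # practitioners at the end of the run

# All application, all supervision, all training.
for label, tf, sf in (("application", 0.0, 0.0),
                      ("supervision", 0.0, 1.0),
                      ("training", 1.0, 0.0)):
    print(label, simulate(tf, sf))
```

Cutting the run short (e.g. `end=2030.0`) makes the contrast vivid: the training extreme is far ahead, while the application and supervision extremes trace identical practitioner totals and differ only in how many of those practitioners are experienced.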

**Observations:**

Devoting all attention to supervision or all attention to application both result in much slower growth and saturation, with the only difference being the fraction of people who are experienced. If experienced people spend all their time supervising new practitioners, a large fraction of practitioners will be experienced, but since experienced people do nothing but make more experienced people, no useful work comes of it.

If experienced people spend all their time training novices, there is a profound effect on the growth of the field. People who express interest can quickly become proficient and start using the technology. While this is an interesting result, it also suggests a deficiency in the model. If experienced people are only doing training, then all the work being done is being done by *New Practitioners* who are not likely to perform as well as experienced practitioners.

**Quality of Work:**

Until now we have kept the adoption fraction constant, but in reality it is a function of the quality of work being done by practitioners.

The willingness of people to adopt a new technology depends on a number of things including the difficulty of learning the technology, the expected benefits, and the compatibility of the technology with existing technologies. While it is important to have lots of people espousing the value of a technology, unless the technology displays significant and valuable results, it will never take off.

We will use *quality of work* as a measure of the success of the technology and differentiate between new and experienced practitioners in determining the quality of work being done. Quality here represents the fraction of projects that are successfully implemented. Projects that lead to bad decisions, are started but abandoned, never get implemented or otherwise get off track are not successes. We will let the quality of work being done influence *adoption fraction*.

We add new variables to get *average quality* and its effect on adoption.
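The text does not list the quality equations, so the following is a hedged sketch of one plausible formulation: assumed quality levels for work by new versus experienced practitioners, an effort-weighted average quality, and an adoption fraction scaled by that average. All of the constants here (0.25, 1.0, the linear scaling) are illustrative assumptions, not values from the original model.

```python
# Illustrative quality feedback: average quality is the effort-weighted
# fraction of successful projects, and it scales the adoption fraction.
def average_quality(new, experienced, application_fraction,
                    new_quality=0.25, experienced_quality=1.0):
    effort_new = new                                  # new practitioners apply full time
    effort_exp = experienced * application_fraction   # experienced split their time
    total_effort = effort_new + effort_exp
    if total_effort == 0:
        return new_quality
    return (effort_new * new_quality
            + effort_exp * experienced_quality) / total_effort

def effective_adoption_fraction(avg_quality, base=0.01):
    # Adoption scales with the observed quality of work.
    return base * avg_quality

print(effective_adoption_fraction(average_quality(new=100, experienced=50,
                                                  application_fraction=0.8)))
```

The structure captures the feedback discussed below: when experienced practitioners withdraw from applications, the effort-weighted quality sinks toward the new-practitioner level and adoption slows.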

Having all practitioners spend all their time on applications (everyone learns by doing) is now the best growth strategy, but all of the growth rates are slow relative to those of the last model. The reason is simple: when only new practitioners are doing applications, the quality is low and new interest stays low. To maximize growth in this model it is necessary to strike a **balance between applications and teaching**.

If we set both *supervision fraction* and *training fraction* to 0.1 we get better results: **The point here is that as we add additional structure to the model to enhance its realism, the simple-minded strategy of training people like mad falls apart.**

We have started from a number of written hypotheses and developed a model that has helped us to explore some of these hypotheses in a unified framework.

When you get an insight from a simple model you need to stop, look around, and ask yourself "is this what is happening?" In some cases the answer is yes, and the model has given you a new basis for understanding reality and acting on that understanding. In this case the answer is maybe. We have seen some plausible dynamics, but done little to establish confidence that the model represents what is really happening. Unless we go further and make use of data and Reality Checks, we could end up with a model that seems plausible, but is just plain wrong.