
INTERNATIONAL JOURNAL OF SURGICAL PROCEDURES (ISSN:2517-7354)

100 Years BAUHAUS Dessau/Germany: What designers of clinical studies can learn from designers and architects of BAUHAUS Dessau and hochschule für gestaltung ulm/Germany

Franz Porzsolt1*, Karl-Walter Jauch2, Robert M. Kaplan3

1 Institute of Clinical Economics (ICE) e.V, 89081 Ulm, Germany
2 University Hospital,  Ludwig-Maximilian University Munich, 81377 Munich, Germany
3 Clinical Excellence Research Center, Stanford University, Stanford CA 94305-6015, United States


Porzsolt F, Jauch KW, Kaplan RM. 100 Years BAUHAUS Dessau/Germany: What designers of clinical studies can learn from designers and architects of BAUHAUS Dessau and hochschule für gestaltung ulm/Germany. Int J SurgProced. 2020 Mar;3(1):133.

© 2020 Porzsolt F, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 international License, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

Abstract

The activities on the occasion of the 100th anniversary of the BAUHAUS Dessau reminded us of interesting parallels between the challenges faced by architects and designers of buildings and those faced by designers of clinical studies. The designers of buildings knew the solutions to these challenges more than 50 years before the designers of clinical studies recognized the challenges of clinical trials. This knowledge advantage at BAUHAUS Dessau and hochschule für gestaltung ulm (founded eight years after World War II as a contribution to dealing with the past) can contribute to solving an existing scientific and societal challenge in healthcare: the assessment of REAL-WORLD EFFECTIVENESS (RWE). Valid data on RWE will reduce healthcare overuse and overtreatment and provide a new quality of evidence to clinical guidelines and court decisions, in contrast to experimental efficacy data, which can confirm a proof of principle but not strategies for solving real-world challenges. Our examples call for artificial intelligence to identify necessary knowledge, locate existing knowledge, and link both.

“Problems cannot be solved with the same mind-set that created them.” Albert Einstein (b. 1879, Ulm/Germany)

Introduction

One hundred years ago, the BAUHAUS school for architecture and design was founded in Weimar, Germany, later moving to Dessau, based on five features: Form Follows Function, True Materials, Minimalist Style, Gesamtkunstwerk (synthesis of different art forms), and Uniting Art and Technology [1]. Two additional and important aspects were added by Gropius and many other BAUHAUS architects. Gropius wanted to connect the arts not only with industry and technology, but also with science. A larger group of BAUHAUS architects developed the first industrialized residential buildings to house less-privileged community members. These seven aspects, five related to design, one to science, and another to social affairs, represent the concept of BAUHAUS in Dessau (founded in 1919) and of hochschule für gestaltung (hfg) ulm (founded in 1953).

The essential aspects of the philosophy of the hochschule für gestaltung (hfg) ulm were summarized by Meister & Meister-Caliber in their comprehensive hfg documentation [2]. The hfg buildings express an approach and a program culminating in a manifesto. Max Bill, a Swiss architect and the co-founder and first rector of the hfg ulm, introduced the ‘dynamic relationship between function and form’ [3]. Bill had decided not to create spectacular buildings, but to construct cost-efficient buildings without unnecessary expenditure. This moral principle inevitably leads to an ‘aesthetic of the useful’ [2]. This simple and clearly structured architectural philosophy has influenced our concept of clinical economics.

Only a few years after the BAUHAUS concept was introduced, the details of designing a clinical trial were described by Sir Archie Cochrane and Sir Austin Bradford Hill [4]. These pioneers of evidence-based medicine posed three elementary questions that should be answered before a new intervention can be implemented in everyday clinical practice: Can it work? Does it work? Is it worth it?

The strategy and tools for answering the first question soon became available. Sir Ronald A. Fisher and Sir Austin Bradford Hill established the principle of the Randomized Controlled Trial (RCT), which can answer the first question, ‘Can it work?’. The answer to the first question confirms the ‘proof of a principle’ (efficacy), which reflects the result of a ‘laboratory experiment’. However, an RCT can never answer the second question, ‘Does it work?’. A laboratory experiment describes effects under Experimental Study Conditions (ESC) that significantly differ from Real World Conditions (RWC). The effect under RWC is called ‘effectiveness’ or, for reasons of clarity, ‘Real World Effectiveness (RWE)’. In contrast to the proof of principle, the RWE confirms effects that are generated in day-to-day clinical practice under everyday RWC. The key message of our contribution is that the RWE can be detected neither under the same conditions nor with the same tools as effects that are generated under ESC. The third Cochrane-Hill question, ‘Is it worth it?’, is equally important. The answer to this question, the description of the perceived value, cannot yet be provided by a machine. The perception of a value is subjective and reflects the attitude, skills, and knowledge of the individual person who is asked to describe the value of something. Michael Drummond (Centre for Health Economics, University of York/UK) and his international colleagues offer several approaches to describing the subjective value of healthcare from individual patient (e.g. discomfort, inconvenience, fear, duration) or societal (e.g. cost vs. benefit) perspectives [5,6]. Our proposed strategy to operationalize the Cochrane-Hill concept is summarized in Table 1.

The three aims of our paper are, first, to demonstrate that the designers of buildings and the designers of clinical trials have to solve similar challenges: both strive for new solutions and improvements, but each starts with different objectives and generates different solutions, the architects in buildings and the clinical researchers in studies. Second, we demonstrate that the solutions of the BAUHAUS designers were available 90 years, and those of the hfg designers 50 years, before clinical researchers identified the important challenges of clinical studies. Third, we suggest that the designers of clinical studies can learn from the experience and the lessons of the designers of buildings how to avoid mistakes. Here are the recommendations.

“Form follows function” 

The American architect Louis Sullivan introduced the term and concept ‘Form Follows Function’ in 1896 [6]. This original version of the concept means that function clearly comes first, and the form can only follow once the function has been identified. Neither the form nor the function of an action or product can be selected without knowing its objective. The appropriate sequence in designing a building or a clinical trial is: first, define the objective; second, select the function of the objective; and third, choose the form of the objective. The results will be completely different when the objective is to build the most artistic building rather than the most practical building. Some compromises in function are probably unavoidable when designing an aesthetic building. These functional compromises are unacceptable when the primary objective is the function of the building.

Most clinicians are convinced that the Randomized Controlled Trial (RCT) completed under Experimental Study Conditions (ESC) is the most credible method for evaluating the benefit of a medical intervention. RCTs are a particular form of clinical research that can fulfill a highly specific function, i.e. to confirm the proof of principle. It is impossible to use an RCT to predict the probability of a successful treatment under Real World Conditions (RWC) because an RCT must be performed under strictly controlled experimental conditions that differ in many ways from the ‘natural chaos’ of everyday patient care under RWC. Examples are described in Supplement I.

It is difficult for a non-clinician to understand the subjective feelings associated with the perceived ‘natural chaos of RWC’. Under RWC the clinician has to consider all the individual patient’s existing risk factors before selecting the best possible treatment. This selection will be influenced by the results of the medical history, clinical examination, laboratory tests, and imaging technologies. Investigators who are usually not aware of this important investigative pre-treatment phase of medical management perceive the resulting heterogeneity of patient characteristics and the large choice of interventions as ‘natural chaos’, which must and can be resolved by combining the intended treatment endpoints with the individual patient risks that are related to each of the selected endpoints [7-9]. Potential confusion about the application of the appropriate type of clinical study can be avoided when the first decision in designing a new trial addresses the purpose (function) of the study and the second decision addresses the design (form) of a trial. ‘Form Follows Function’ is a helpful memory aid. In summary, the first step in designing a clinical trial is a description of the function. The function of a building corresponds to the general hypothesis of a clinical trial, i.e. the assessment of either efficacy or effectiveness or value of an intervention. All other variables of a clinical trial depend on the general hypothesis. These dependent variables describe a] the detailed study objectives, such as demonstration of superiority, equivalence, or non-inferiority related to specified endpoints; b] the details of the statistical tests that have to be (but are not always) concordant with the study objectives, such as a one- or two-sided test, and the estimation of the type I and type II errors; and, finally c] the translation of the detailed results into plain language [10]. 
More recently, one additional step was added: our solution to a scientific question cannot be presented to a machine in plain language; it has to be translated into mathematical equations. Otherwise the computer will not understand what to do. The help of the computer is needed for two reasons: the growing volume of data and the growing complexity of the solutions.
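How the general hypothesis drives the statistical machinery can be made concrete with a short sketch. The Python fragment below is our own illustration (the function name and the example response rates are assumptions, not taken from this paper); it shows how the choice of a two-sided superiority test and the estimates of the type I and type II errors translate into a per-arm sample size:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided superiority test of two
    proportions (normal approximation). alpha is the type I error;
    1 - power is the type II error."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. to detect an improvement in response rate from 60% to 75%:
n_per_arm = sample_size_two_proportions(0.60, 0.75)   # 150 patients per arm
```

Changing any dependent variable, a one-sided instead of a two-sided test, or a different type II error, changes the required sample size, which is exactly the dependence on the general hypothesis described above.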

“True materials”

According to Bauhaus and hfg designers, the elements (materials) used for construction are integral to the purpose and function of the building. If we strictly apply this requirement to the design of clinical trials, we must demand that the real nature of the constructed objects be preserved. We consider that the real nature of a clinical trial is either explanatory or pragmatic, but nothing in between. Our conviction is justified by the difference between efficacy and effectiveness. We may complete an explanatory (experimental) trial to describe the efficacy of an intervention or complete a pragmatic (descriptive) trial to describe the effectiveness of an intervention. The postulated continuum between an explanatory and a pragmatic trial is only a continuum of forms, not of functions [11-15]. When clinical studies are designed, a continuum of functions is rarely considered, and a continuum of forms can be an artefact produced by mixing elements of an explanatory study (e.g. randomization or the definition of exclusion criteria) with elements of a pragmatic trial (e.g. the description of Real-World Effectiveness, RWE). A pragmatic study should not have exclusion criteria because all patients who meet the inclusion criteria, regardless of co-treatments or co-morbidities, are members of a study that aims to describe Real World Conditions (RWC).

RCTs usually have a sampling bias that is caused by the undisclosed reasons for selecting some potential participants and excluding others. Such data were recently demonstrated in surgical studies. The most frequent reasons were preference for one form of treatment, dislike of the random allocation, and the potentially increased demand of patients for technical support [16].

The conclusion is to use ‘true materials’ for the construction of both buildings and clinical trials. Examples of true materials in clinical trials are the RCTs or PCTs that assess either the dimension of efficacy or that of effectiveness. The literature includes several variations of RCTs, such as the Minimization trial, the Crossover study, the N-of-1 study, the Factorial Design, the Cluster Randomized Study, the Adaptive Design, the Platform study, and the ‘so-called’ Pragmatic trial [17]. It is neither clear whether these variations of RCTs evaluate exactly the same or different effects as those assessed by traditional RCTs, nor whether any of these RCT variants can assess the RWE [18].

Traditional RCTs can provide high-level evidence, but only under the condition that all five forms of bias are excluded: the sampling bias [19] and the selection, performance, attrition, and detection biases [20]. Supplement I shows some additional forms of ‘tacit bias’ that exist in almost all RCTs but are usually not considered when discussing RCT results.

 “Minimalist style”

The minimalist style of Bauhaus art, architecture, and design reflected these ideas of functionality and true materials. Influenced by movements such as Modernism and De Stijl (a Dutch group of painters and architects), Bauhaus artists favored linear and geometrical forms, avoiding floral or curvilinear shapes. Only line, shape, and color mattered; anything else was unnecessary and could be reduced or eliminated.

The consistent application of this “minimalist attitude” can influence the design of clinical trials because the introduction of any criterion into a study design that is not absolutely necessary for the function will influence the study outcome. Adding other criteria to a study design requires consideration of the intended and the unintended consequences. Again, remember Form Follows Function.

“Gesamtkunstwerk”

Walter Gropius, the founder of Bauhaus, was the first to apply the notion of a ‘Gesamtkunstwerk’, which combines multiple art forms, such as fine and decorative arts unified through architecture. According to the Bauhaus philosophy, a building was not an empty shell but just one part of the design, and everything inside added to the overall concept. This concept is not restricted to architecture and buildings; it also expresses a general principle. Terminology demonstrates the significance of this rule for clinical trials: when inconsistencies exist in the terminology used to describe the form and/or the function of a clinical trial, the result will cause more confusion than clarity [21].

“Uniting art and technology” in medical oncology practice

In 1923, Bauhaus organized an exhibition called ‘Art & Technology: A New Unity’ that shifted the Bauhaus ideology by introducing a new emphasis on technology. Bauhaus workshops were used as laboratories in which prototypes of products suitable for mass production and typical of their time were carefully developed and improved. The artists embraced the new possibilities of modern technologies. Bauhaus was a revolutionary movement, changing art, design, and architecture forever. Important Bauhaus objects can still be found on the market today, whether you are looking for the famous Wassily chair by Marcel Breuer, the Barcelona chair by Ludwig Mies van der Rohe, or Josef Hartwig’s iconic chess set. Bauhaus objects, as important pieces of art history, still look surprisingly contemporary today.

Our private Institute of Clinical Economics (ICE) e.V. in Ulm became fascinated by the BAUHAUS ideology when one of us (FP) felt the need to escape from the “autistic undisciplined way of thinking in medicine” [22]. After 25 years in general medicine and medical oncology practice, he joined the group of Horst Kächele, chair of psychotherapy and psychosomatic medicine at the University of Ulm (1997-2009). This cooperation resulted in five joint publications that could only have been written from a perspective outside somatic medicine (English titles of the German publications): “The front and back sides of Evidence-based Medicine (EbM)” [23,24], “The crucial question (Gretchen-question) in medicine” [25], “Placebo surgery at the intersection between the German ‘Evidenz’ and the English ‘evidence’ terms” [25], “Medical progress and the economy of care” [26], and “Learning EbM from medical students” [27].

Kächele’s department was located in the buildings of the hochschule für gestaltung (hfg ulm) next to Hans (Nick) Roericht’s team of product designers. Roericht was teaching, practicing, and developing the hfg philosophy. Roericht and Kächele were the networkers in Ulm who brought together design, product development, and medicine.

“Bridging art & technology with science”

100 years ago, the BAUHAUS principles of art & technology were already well known. Connecting BAUHAUS principles with scientific principles was a visionary idea because the principles of health science were far less developed than the BAUHAUS principles at that time. From our perspective 100 years later, we conclude that these two important societal developments, the BAUHAUS concept and the three Cochrane-Hill questions (Can it work? Does it work? Is it worth it?), emerged at about the same time but obviously completely independently of each other. The designers and architects asked detailed questions related to both the function and the form of a building and to the relations between form and function. The designers of clinical trials attended neither to the three Cochrane-Hill questions nor to the seven BAUHAUS principles. Their attention was focused on a new type of allocation principle, i.e. randomization. Randomization was the dominant principle in clinical research for almost 80 years. Attempts to discuss the pros and cons of randomization rarely gained traction as the RCT acquired the status of the Gold Standard of clinical trials. Other research design issues got little attention. The slogan ‘Form Follows Function’ may have been well known but was not really integrated in our thinking. Today we understand that this neglect is a result of our failure to resolve two open questions in healthcare research.

• First, we should recognize that efficacy and effectiveness have different functions and different forms and should be assessed under different conditions.

 • Second, we should understand that the assessment of RWE under RWC requires a new set of tools.

Over the last 50 years, the most widely accepted evaluation of clinical trials was the pyramid of levels of evidence. If one accepts that progress will be possible even after the period of explanatory trials with the RCT as a standard model, the next period may be a period of explanatory and pragmatic trials with the Pragmatic Controlled Trial (PCT) as an additional standard model [7-9].

“Bauhaus benefits all social classes”

The discussions in Dessau and Ulm showed that it is almost impossible to simultaneously satisfy the demands of populations with different levels of income within the same project. The best possible solution would be to develop a functioning private-public partnership. Each partner would be responsible for solving a specific problem. The word ‘partnership’ means that the separately developed solutions would be coordinated before being implemented. Some of the designers of clinical trials would discuss the standards for the assessment of efficacy, and another team the standards of effectiveness, which is the more interesting and challenging task.

Lessons clinical researchers can learn from BAUHAUS designers and architects

The basic message of the BAUHAUS and hfg is Form Follows Function. This message is also important for the understanding and assessment of scientific evidence in healthcare. Different types of information and different tools (RCTs, PCTs, and CEAs) are necessary to understand, define, and assess the three outcome dimensions: efficacy, effectiveness, and value. Although the assessment of all three dimensions is essential for the inclusion of new interventions into standard day-to-day care in all healthcare systems, we consider the assessment of effectiveness, i.e. of RWE, more important than the assessment of efficacy (the proof of principle). This is justified by the clear definitions of both the functional (objective) and the formal (design) differences between efficacy and effectiveness. Efficacy is assessed for the experimental generation of new knowledge. Effectiveness is assessed for the non-experimental description of the successful application of the new experimental knowledge under RWC, i.e. under standard everyday healthcare conditions.

Our group of scientists expects that the objectivity, reliability, and validity of the results of experimental RCTs and of descriptive PCTs will be comparable. Practitioners’ decisions are influenced not only by formal scientific evidence, i.e. efficacy data, but also by informal personal experience, i.e. effectiveness data from observation and perception under RWC in any doctor-patient encounter. In the future, the practitioner will be able to distinguish experimental data that describe the efficacy, i.e. confirm the proof of a principle, from pragmatic data that describe the effectiveness, i.e. confirm the RWE.

According to Ioannidis, the probability of false research findings is high [28]. The same is probably true for the available description of RWE. Both experimental and observational methods have advantages and limitations, but not the same ones. The hierarchical model of levels and grades of evidence has to be reconsidered. The revised model should describe the objectivity, reliability, and validity of outcomes that influence clinical guidelines and court decisions. Functional instead of formal criteria should define the validity of outcomes. The functional validity of efficacy cannot be confirmed by absence of bias because ‘Absence of evidence is not evidence of absence’ [29]. The functional validity of efficacy can only be contradicted by demonstrating biased conclusions. The functional validity of effectiveness can be confirmed by demonstrating the expected effects of the investigated intervention under RWC. 

In addition to this rule for selecting the purpose and designing the form of clinical trials, it is necessary to select the appropriate language and terminology to communicate the rule to computers. There is nothing mystical about this language and terminology (artificial intelligence). If a concept can be expressed as a mathematical formula, there is a way to ‘teach’ the machine to understand the idea [30]. The problem is not artificial intelligence; we are the problem.

Pythagoras’ simple equation (a² + b² = c²) exemplifies the essential requirements for communicating a rule. First, the validity of this equation has been accepted for more than 2500 years, in contrast to the mathematical equations that describe the functions of clinical trials. Second, using this equation we can clearly explain which types of information and which data have to be entered into the computer to obtain a precise answer. The problem of “big data analysis” is not only related to the management of large volumes of data; it is mainly related to the step-by-step actions that tell the computer what to do. Before discussing mathematical equations, we must agree on the strategic concepts that are to be translated into a mathematical equation. Our published concept for the three-dimensional assessment of efficacy, effectiveness, and value is shown in Table 1. The three Cochrane-Hill questions as well as the BAUHAUS & hfg concepts served as bases for the construction of this table. A pilot test in a German journal indicated a high acceptance rate of this table [31].
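A minimal sketch of what ‘teaching the machine’ means in practice: once the Pythagorean rule is stated as an equation rather than in plain language, a computer can both apply it and check it. The function names below are our own illustration:

```python
import math

def hypotenuse(a, b):
    """Apply the rule a^2 + b^2 = c^2 to compute the third side."""
    return math.sqrt(a * a + b * b)

def is_right_triangle(a, b, c):
    """Check whether three side lengths satisfy the Pythagorean rule."""
    return math.isclose(a * a + b * b, c * c)

hypotenuse(3, 4)            # 5.0
is_right_triangle(3, 4, 5)  # True
```

A rule for clinical trials would need the same treatment: an unambiguous statement of which inputs are required and which step-by-step operations produce the answer.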

Our journey from the first challenging idea to the publication of the solution lasted almost 25 years and is the result of many contributions, most of them originating outside the mindset of medicine. One of the most influential ideas came from the cooperation with the designers of the hfg ulm and from the attention the media paid to the 100th anniversary of the BAUHAUS at Dessau. We learned that the design of a clinical trial should be like the design of a building: form should follow function, i.e. the objective of the trial should be defined before the form, i.e. the design of the trial, is discussed. We now challenge our own tacit acceptance of randomization as the optimal design tool for both efficacy and effectiveness. Following the recommendations of BAUHAUS and hfg, we now believe that the function of different trials must be addressed before choosing their forms. The remaining six recommendations, True Materials, Minimalist Style, Gesamtkunstwerk (synthesis of different art forms), Uniting Art and Technology, Bridging Art and Technology with Science, and the Provision of Solutions for the Less Privileged, are equally important and may need to be elaborated in more detail.

The meeting of the two ‘unrelated’ disciplines, architecture and clinical trials, happened by chance. To increase the likelihood of such meetings, Ernst Pöppel created the concept of SYNTOPY [31], a hybrid of the two ancient Greek words ‘syn’ (together) and ‘topos’ (any type of place). SYNTOPY describes a place where communicative people can come together. The function of SYNTOPY is the communication of knowledge and the generation of new ideas and concepts. Artificial Intelligence is a new version of SYNTOPY, as it combines both its function and its form: it will be able to identify the knowledge needed at a certain place, identify the topos where the needed knowledge is already available, and link both, hopefully in a cottage and not only in a cloud.


Table 1: Sir Archie Cochrane and Sir Austin Bradford Hill asked three basic questions: Can it work? Does it work? Is it worth it? We suggest the answers: the Proof of Principle, i.e. the Efficacy; the Real-World Effectiveness; and the Subjective Value.

The answers can be assessed either under Ideal Study Conditions or under Real World Conditions from the perspectives of Clinical Research, Health Services Research or Economic Research. The three different answers provided under different conditions and from different perspectives require three different types of studies and three different types of tools for assessment of either efficacy or effectiveness or value.

Competing Interest statement

The authors declare no competing interests.

Supplement I: Tacit Bias (TB) in Clinical Trials

Definition of ‘Tacit Bias (TB)’.

Tacit Bias is a form of bias that is usually not considered by the researcher. The sampling bias is a good example. Sampling bias has been described in pragmatic trials [19] and means that patients with particular characteristics are systematically excluded from such trials. Unfortunately, this is only half of the truth. The sampling bias has to be considered in all trials, pragmatic and explanatory, unless it is explicitly stated that all patients who met the inclusion criteria were indeed included in the trial. In other words, exclusion criteria must not be applied when the aim of a study is the description of the Real World Effectiveness [7,8,31].

As we were trained over the last 30 years to define both inclusion and exclusion criteria when designing a high-quality trial, we developed the ‘high-quality study reflex’: instead of thinking about the function of our designed study, we are concerned only about its form. The consequences of disregarding BAUHAUS & hfg message #1, ‘FORM FOLLOWS FUNCTION’, were:

  • The huge volume of controversial literature on the significance of the results of RCTs.
  • The lack of a clear functional and formal distinction between efficacy and effectiveness.
  • The likely consequence of this absent distinction of ‘eff and eff’, i.e. the unexpectedly low rate (15%) of concordant recommendations in international guidelines (submitted for publication in an international journal since August 2018). The editors of clinical guidelines unknowingly mix up efficacy with effectiveness. Formal efficacy is obtained from the RCT results described in the scientific literature. Informal effectiveness is obtained from the observation of patients under RWC, where a huge gap is often observed between the results described in RCTs and the observations made when treating patients, i.e. the efficacy-effectiveness gap [21].

The Justification of Tacit Sampling Bias (TSB)

Most colleagues design their studies according to ‘reflexive scientific behavior’. They were told that the RCT provides the highest level of evidence. RCTs require the definition of inclusion and exclusion criteria, and therefore almost all researchers apply inclusion and exclusion criteria when designing high-quality clinical trials. Such reflex behavior, also known as the ‘autistic undisciplined way of thinking’ [22], can be explained, shows reproducible consequences, and may therefore justify the name “TSB”:

  • Sample size. The sample of patients included in an RCT has to be large enough to guarantee a fairly similar distribution of all known and unknown factors that can influence the outcome of an RCT. Many authors of small RCTs seem to ignore this essential condition. The “Table 1” in most clinical studies describes the distribution of patient characteristics within the investigated groups, but the number of individuals in most of these groups is too small to detect significant differences in the risk profiles of the compared patient groups. In other words, the information provided in “Table 1” often lacks validity (it cannot confirm what it pretends to confirm).
  • Exclusion criteria. The more exclusion criteria contribute to the selection of a highly homogeneous study population, the higher the chance of a fairly equal distribution of confounders across all study groups. On the other hand, the results of studies that include only a highly selected group of patients are applicable only to patients who are comparable to this highly selected group, i.e. the external validity of results from highly selected study groups is always low.
  • Strong preferences of doctors and patients. In an experimental (synonym: explanatory) study such as an RCT, the allocation of patients to a particular intervention is always made by the investigator, usually by random allocation [33-35]. This allocation should (theoretically!) be influenced neither by the patient nor by the doctor. The RWCs demonstrate that both patients and doctors will always influence the result of an RCT through their existing expectations or preferences, which can be strong or weak.
  • Strong expectations / preferences cause a sampling bias not only in observational studies but also in RCTs: patients will not be offered participation in a particular RCT if the doctor has a strong expectation, i.e. a preference for one of the investigated treatment options. Patients with a strong preference for one of the treatment options in an RCT definitely want to receive the individually preferred treatment and will resist random allocation. This means that doctors and patients with strong preferences for one of the investigated options in an RCT cause a biased selection of patients.
  • Weak preferences do not cause a selection bias but influence the results of RCTs; they are discussed below.
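The point about the low power of a typical “Table 1” comparison can be made concrete with a rough calculation. The sketch below uses our own assumed numbers (50 patients per arm, a 30% vs. 45% prevalence of a risk factor), not data from this paper:

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided z-test to detect a difference
    between two proportions with n patients per group."""
    z = NormalDist()
    se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z_alpha = z.inv_cdf(1 - alpha / 2)
    return z.cdf(abs(p1 - p2) / se - z_alpha)

# With 50 patients per arm, a real 30% vs. 45% imbalance in a risk
# factor is detected only about a third of the time:
power_two_proportions(0.30, 0.45, 50)
```

Under these assumptions the power is roughly 0.35, far below the conventional 0.80, so a non-significant “Table 1” cannot confirm that the compared risk profiles are actually similar.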

The Justification of Tacit Performance Bias

Most patients who are invited to participate in an RCT will participate despite an existing weak preference, i.e. a preference for either one or none of the investigated treatment options. We show here that weak patient preferences cause different types of bias than strong preferences do in a non-blinded RCT. Patients with strong preferences decline participation and contribute to a sampling bias.

Patients with weak preferences will receive either their preferred or a non-preferred treatment. Those who receive the preferred treatment will show a more favorable response than patients who do not receive their preferred treatment [36-38]. Langer, Crum, Leibowitz and others confirmed the hypothesis that the effect of pre-existing or induced expectations on the outcome is one of two necessary conditions that cause a placebo effect [39-41]. The second condition is the congruence of the provided information with the expected information [42,43]. Some small studies report that monetary incentives influence biologic outcomes, that the effects observed in blinded studies are weaker than those in unblinded or 'not-so-blinded' studies [17], and that the effects of active treatment are stronger than the effects of placebo, which in turn are stronger than the effects of no treatment [7,37]. According to the three-dimensional assessment of healthcare outcomes, it can be concluded that the results of these experiments answer the first of the Cochrane-Hill questions, i.e. the proof of principle of the placebo effect. The currently existing controversy is a terminology problem.
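How a weak preference translates into a tacit performance bias in an unblinded trial can be sketched with a small simulation. All numbers are assumptions for illustration: the drug is modelled as truly ineffective, an "expectation bonus" improves the outcome whenever an unblinded patient happens to receive the arm he or she favours, and 70% of enrollees are assumed to mildly favour the active treatment:

```python
import random

random.seed(7)

TRUE_EFFECT = 0.0  # assumption: the drug truly does nothing
BONUS = 1.0        # assumed extra improvement when allocation matches preference
N = 5_000

def outcome(arm, prefers_active, blinded):
    base = random.gauss(0, 1)
    effect = TRUE_EFFECT if arm == "active" else 0.0
    if blinded:
        return base + effect  # patient cannot tell whether the arm matched
    matched = (arm == "active") == prefers_active  # unblinded: expectation acts
    return base + effect + (BONUS if matched else 0.0)

def trial(blinded):
    arms = {"active": [], "control": []}
    for _ in range(N):
        arm = random.choice(["active", "control"])
        prefers_active = random.random() < 0.7  # most enrollees favour the new drug
        arms[arm].append(outcome(arm, prefers_active, blinded))
    return (sum(arms["active"]) / len(arms["active"])
            - sum(arms["control"]) / len(arms["control"]))

print(f"blinded trial, observed effect:   {trial(True):+.2f}")   # ~0: no true effect
print(f"unblinded trial, observed effect: {trial(False):+.2f}")  # inflated by expectation
```

In this toy model, blinding works because the patient cannot know whether the allocation matched the preference, so the observed difference reflects only the (here absent) pharmacological effect; without blinding, the uneven distribution of matched expectations across the arms produces an apparent treatment effect out of nothing.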

Interpretation and Evaluation of Tacit Bias

According to the existing evidence, it is likely that placebo effects result from communicative signals that match the expectation of the receiver, i.e. the patient. Some important conclusions may be derived:

  • If this causal relationship between information, expectation, and clinical effect can be confirmed, a new understanding of the effects mediated under RWCs will emerge.
  • We will understand that psychological effects are part of any doctor-patient encounter under both RWCs and ESCs. Under ESCs these effects are unwanted and can be (partially) controlled.
  • Helpful effects of healthcare may be triggered by physical, bio-molecular, psychological, social, or unexplained processes. Considering the endpoints of care, it is challenging to quantify the individual contributions of these components to the overall effect of healthcare. It may also be difficult to assess the value to patients of each of these possible contributions, and hard to justify the reimbursement of costs for one but not for another.
  • Placebo effects are considered unwanted confounders under ESCs. Under RWCs it is unlikely that placebo effects are considered unwanted. Most patients may prefer placebo effects to pharmacologic effects if both provide comparable relief of pain.

References

  1. Schipper R. 5 characteristics of Bauhaus art, architecture and design. Last accessed Oct 22, 2019.
  2. Meister DP, Meister-Klaiber D. Baumonographie. Einfach komplex – max bill und die architektur der hfg ulm. Verlag Scheidegger & Spiess, Zürich 2018; 650 pp.
  3. Maldonado T. Max Bill. Editorial Nueva Visión, Buenos Aires 1955.
  4. Haynes B. Can it work? Does it work? Is it worth it? The testing of healthcare interventions is evolving. BMJ. 1999 Sep;319:652-653.
  5. Drummond M, O'Brien B, Stoddart GL, Torrance GW. Methods for the economic evaluation of health care programmes. Oxford University Press, Oxford, UK, 1997; 305 pp.
  6. Sullivan LH. The tall office building artistically considered. Lippincott's Magazine 1896;57:403-409. Reprinted in Inland Architect and News Record 27 (May 1896), pp. 32-34; Western Architect 31 (January 1922), pp. 3-11; published as "Form and Function Artistically Considered" in The Craftsman 8 (July 1905), pp. 453-458.
  7. Porzsolt F, Eisemann M, Habs M, Wyer P. Form follows function: pragmatic controlled trials (PCTs) have to answer different questions and require different designs than randomized controlled trials (RCTs). J Publ Health. 2013 Jun;21:307-313.
  8. Porzsolt F, et al. Efficacy and effectiveness trials have different goals, use different tools, and generate different messages. Pragmat Obs Res. 2015 Nov;6:47-54.
  9. Porzsolt F. The assessments of three different dimensions "Efficacy", "Effectiveness", and "Value" require three different tools: the Randomized Controlled Trial (RCT), the Pragmatic Controlled Trial (PCT), and the Complete Economic or Cost-Effectiveness Analysis (CEA).
  10. Phlippen M. Progress in evidence-based medicine by critical appraisal of evidence-based medicine: concordance of study hypothesis, objectives and statistical test in 120 publications from six international journals. Medical Thesis. University of Ulm, 2020.
  11. Zwarenstein M, Treweek S, Gagnier JG, Altman DG, Tunis S, et al. for the CONSORT and Pragmatic Trials in Healthcare (Practihc) groups. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390.
  12. Gartlehner G, Hansen RA, Nissman D, Lohr KN, Carey TS. Criteria for distinguishing effectiveness from efficacy trials in systematic reviews. Technical Review 12 (prepared by the RTI-International–University of North Carolina Evidence-based Practice Center under contract no. 290-02-0016). AHRQ Publication No. 06-0046. Rockville, MD: Agency for Healthcare Research and Quality. 2006 Apr.
  13. Thorpe KE, Zwarenstein M, Oxman AD, Treweek S, Furberg CD, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 2009 May;62:464-475.
  14. Loudon K, Zwarenstein M, Sullivan F, Donnan P, Treweek S. Making clinical trials more relevant: improving and validating the PRECIS tool for matching trial design decisions to trial purpose. Trials. 2013 Apr;14:115.
  15. Loudon K, Zwarenstein M, Sullivan FM, Donnan PT, Gágyor I, et al. The PRECIS-2 tool has good interrater reliability and modest discriminant validity. J Clin Epidemiol. 2017 Aug;88:113-121.
  16. Abraham NS, Young JM, Solomon MJ. A systematic review of reasons for nonentry of eligible patients into surgical randomized controlled trials. Surgery. 2006 Apr;139:469-483.
  17. Lange S, Sauerland S, Lauterberg J, Windeler J. The range and scientific value of randomized trials – part 24 of a series on evaluation of scientific publications. Dtsch Arztebl Int. 2017;114:635-640.
  18. Porzsolt F, Jauch KW. Real-world usefulness is missing. Dtsch Arztebl Int. 2018 Feb;115:114-115.
  19. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, et al. for the STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. J Clin Epidemiol. 2008 Apr;61(4):344-349.
  20. Jüni P, Altman DG, Egger M. Systematic reviews in healthcare: assessing the quality of controlled clinical trials. BMJ. 2001;323:42-46.
  21. Porzsolt F, Wiedemann F, Schmaling K, Kaplan RM. The risk of imprecise terminology: incongruent results of clinical trials and incongruent recommendations in clinical guidelines. J. 2019.
  22. Bleuler E. Das autistisch-undisziplinierte Denken in der Medizin und seine Überwindung [Autistic undisciplined thinking in medicine and how to overcome it]. Translated and edited by Ernest Harms. Darien, Connecticut: Hafner Publishing Company. Garfield Tourney. 1973 Oct.
  23. Porzsolt F, Kächele H. Evidence-based medicine: Beschreibung der Fassade und der Rückseite des Gebäudes (Editorial). Leber Magen Darm. 1999.
  24. Kächele H, Porzsolt F. Die Gretchenfrage der Medizin (Editorial). PPmP Psychother Psychosom Med Psychol. 1999;49:37.
  25. Porzsolt F, Kächele H. Placebo-Chirurgie am Scheideweg zwischen "Evidenz" und "evidence" (Editorial). Zentralblatt für Chirurgie. 1999;124:2-3.
  26. Porzsolt F, Kächele H. Medizinischer Fortschritt und wirtschaftliche Patientenversorgung. Österreichische Krankenhauszeitung. 1999;40:23-28.
  27. Porzsolt F, Finking G, Göttler S, Kumpf J, Schweiggart M, et al. Evidence-based Medicine in der Onkologie. Von Studenten lernen. Der Onkologe. 2006;473-475.
  28. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005 Aug;2:e124.
  29. Altman DG, Bland JM. Absence of evidence is not evidence of absence. BMJ. 1995 Aug;311:485.
  30. Pearl J, Mackenzie D. The book of why: the new science of cause and effect. 2019.
  31. Porzsolt F, et al. Versorgungsforschung braucht dreidimensionale Standards zur Beschreibung von Gesundheitsleistungen (Teil 2). Monitor Versorgungsforschung. 2019;4:53-60.
  32. Pöppel E. Syntopie / Syntopy.
  33. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutic trials. J Chron Dis. 1967 Aug;20(8):637-648.
  34. Grimes DA, Schulz KF. An overview of clinical research: the lay of the land. Lancet. 2002 Jan;359(9300):57-61.
  35. Thiese MS. Observational and interventional study design types; an overview. Biochemia Medica. 2014 Jun;24(2):199-210.
  36. Porzsolt F, Kumpf J, Coppin C, Pöppel E. Stringent application of epidemiologic criteria changes the interpretation of the effects of immunotherapy in advanced renal cell cancer. In: Evidence-Based Oncology (ed. Williams C), 34-38 (BMJ Books, 2003).
  37. Porzsolt F, Schlotz-Gorton N, Biller-Andorno N, Thim A, Meissner K, et al. Applying evidence to support ethical decisions: is the placebo really powerless? Sci Eng Ethics. 2004 Jan;10(1):119-132.
  38. Crum AJ, Langer EJ. Mind-set matters: exercise and the placebo effect. Psychological Science. 2007 Feb;18:165-171.
  39. Crum AJ, Leibowitz KA, Verghese A. Making mindset matter. BMJ. 2017;359:j5308.
  40. Leibowitz KA, Hardebeck EJ, Goyer JP, Crum AJ. The role of patient beliefs in open-label placebo effects. Health Psychol. 2019;38:613-622.
  41. Wall J, Mhurchu CN, Blakely T, Rodgers A, Wilton J. Effectiveness of monetary incentives in modifying dietary behavior: a review of randomized, controlled trials. Nutr Rev. 2006;64:518-531.
  42. Hróbjartsson A, Emanuelsson F, Skou Thomsen AS, Hilden J, Brorson S, et al. Bias due to lack of patient blinding in clinical trials. A systematic review of trials randomizing patients to blind and nonblind sub-studies. Int J Epidemiol. 2014 Aug;43(4):1272-1283.
  43. Colagiuri B, Sharpe L, Scott A. The blind leading the not-so-blind: a meta-analysis of blinding in pharmacological trials for chronic pain. J Pain. 2019 May;20(5):489-500.
  44. Porzsolt F, Williams AR, Kaplan RM (eds). Klinische Ökonomik. Effektivität und Effizienz von Gesundheitsleistungen [Clinical Economics. Effectiveness and Efficiency of Healthcare Services], 1-372. Ecomed Verlagsgesellschaft, Landsberg/Lech, Germany. 2003.
  45. Porzsolt F, Kaplan RM (eds). Optimizing Health – Improving the Value of Healthcare Delivery, 1-313. Springer, New York. 2006.
  46. Frosch D, Porzsolt F, Heicappell R, Kleinschmidt K, Schatz M, et al. Comparison of German language versions of the QWB-SA and SF-36 evaluating outcomes for patients with prostate disease. Qual Life Res. 2001;10(2):165-173.
  47. Porzsolt F, Kaplan RM. CLINECS - Strategie und Taktik zum Nachweis des Nutzens von Gesundheitsleistungen aus Sicht des Patienten ("Value for Patients"). Ges.polit Komment. 2005;46:17-22.
  48. Kaplan RM, Porzsolt F. The natural history of breast cancer. Arch Intern Med. 2008;168:2302-2303.
  49. Porzsolt F, Kirner A, Kaplan RM. Predictors of successful cancer prevention programs. Recent Results Cancer Res. 2009;181:19-31.
  50. Porzsolt F, Ghosh AK, Kaplan RM. Qualitative assessment of innovations in healthcare provision. BMC Health Serv Res. 2009;9:50.

Supplement II: Acknowledgements

This work would not have been possible without the support of about 100 doctoral students and many unnamed friends and colleagues from Australia, Austria, Brazil, Canada, Germany, Norway, and the US. Essential contributions to this work over more than two decades have been made by my teachers: Hermann Heimpel, hematologist with heart and soul, a touch of cynical analyst, and a taste of medical oncologist (University of Ulm); Horst Kächele, psychoanalyst including psychosomatic medicine (University of Ulm), co-author of the first five 'disruptive' evidence-based publications [10,23-26], and networker to Hans (Nick) Roericht, who is our link to the world of design & art, teaching the triad 'conception-simulation-strategy' based on a functional-theory approach (product designer at hochschule für gestaltung ulm, https://www.moma.org/collection/works/3562, Hochschule der Künste Berlin, industrial design IV); and Ernst Pöppel (Medical Psychology, Ludwig-Maximilian University Munich), neuro-physiologist and global syntopist who has opened many scientific and social doors, and co-author of the paper on the 'power of placebo' [37]. A small group of highly motivated participants in my course on 'Clinical Economics' at Universidade Federal Fluminense, Niterói/RJ/Brazil (Natalia G. Rocha, Alexandra C. Toledo-Arruda, Tanja G. Thomaz, Cris Moraes, Tais R. Bessa-Guerra, Mauricio Leão, Andre Ricardo Araujo de Silva) made considerable contributions to the development of the design of the Pragmatic Controlled Trial [8]. They consistently followed my request to repeatedly ask questions until everybody completely understood the proposed rationale of the new PCT study design. Robert M. Kaplan (Research Director, Clinical Excellence Research Center (CERC), Stanford University) has supported our ideas for 20 years, co-edited books [44,45], co-authored several original publications [31,46-50], and provided editorial help in the preparation of many other manuscripts.
The design of the PCT would not exist without the contributions of several members of our Institute of Clinical Economics: Oscar Kamga Wambo (Internal Medicine, MSc/Public Health, Public Health Services Berlin) introduced our team to the three pertinent questions posed by Cochrane and Bradford Hill; Manfred Weiss (Anesthesiology, University of Ulm) and Christel Weiss (Biostatistics, University of Heidelberg) were involved in the preparation of grant applications over many years; and Karl-Walter Jauch (CEO, University Hospitals LMU Munich) contributed two perspectives, that of the clinician and that of the manager of an academic hospital. These two contributions empowered our message to those who work at the coalface of healthcare [18].