### Scott’s Qualitative Fixed Point Technique in Complexity Analysis of Algorithms

Maria Lopez-Ramirez1, Oscar Valero2*

1 Juniper Innovating Travel Technology, Gremis Fusters, 33, Balearic Islands, Spain

2 Department of Mathematical Sciences and Informatics, University of the Balearic Islands, Ctra, Baleares, Spain

### Corresponding author

Oscar Valero
Department of Mathematical Sciences and
Informatics
University of the Balearic Islands
Ctra, Baleares, Spain
E-mail: o.valero@uib.es

• Accepted Date: 27 December 2017
• Published Date: 08 January 2018

DOI:   10.31021/acs.20181101

Article Type:   Research Article

Manuscript ID:   ACS-1-101

Publisher:   Boffin Access Limited.

Volume:   1.2

Journal Type:   Open Access

### Citation

Ramírez ML, Valero O. Scott’s Qualitative Fixed Point Technique in Complexity Analysis of Algorithms. Adv Comput Sci. 2018 Jan;1(2):101

### Abstract

In 1972, D.S. Scott developed a qualitative mathematical technique for modeling the meaning of recursive specifications in Denotational Semantics. In this paper we show that Scott’s original technique remains helpful for the Asymptotic Complexity Analysis of algorithms, requiring only a reduced number of hypotheses and elementary arguments. Thus, we disclose that such a qualitative approach presents a unified mathematical method that is useful for both Asymptotic Complexity Analysis and Denotational Semantics. Moreover, we emphasize the introduced technique by applying the results to provide the asymptotic complexity (upper and lower bounds) of the running time of computing of a celebrated algorithm.

2010 AMS Classification: 47H10, 54F05, 68N30, 68Q55, 68Q25, 68W40

### Keywords

Partial Order; Scott; Kleene; Fixed Point; Denotational Specification; Algorithmic Complexity; Recurrence Equation

### Introduction

In 1972, D.S. Scott developed a mathematical technique for modeling the meaning of recursive specifications in Denotational Semantics. The programming languages used to implement such specifications allow, in general, the use of recursive definitions (denotational specifications) in such a way that the meaning of the aforesaid definitions is expressed in terms of its own meaning. Nowadays, the aforementioned mathematical tool is known as the fixed point induction principle. Such a principle is based on fixed point theory, namely Kleene’s fixed point theorem for monotone self-mappings defined on partially ordered sets (for a detailed treatment of the topic we refer the reader to [1-4]). Concretely, Scott’s induction principle states that the meaning of a recursive specification is obtained as a fixed point of a non-recursive mapping, induced by the denotational specification, which is the supremum of the successive iterations of the aforesaid non-recursive mapping acting on a distinguished element of the model. The non-recursive mapping expresses the evolution of the program execution. Besides, the partial order encodes a notion of computational information so that each successive iteration of the mapping matches up with an element of the mathematical model which is greater than (or equal to) those associated to the preceding steps of the program execution. Of course, it is assumed that each iteration of the program computation provides more information about the meaning of the algorithm than those executed before. Therefore, the mentioned fixed point encodes the total information about the meaning provided by the elements of the increasing sequence of successive iterations and, in addition, no more information can be extracted from the fixed point than that provided by the elements of such a sequence.

Scott’s fixed point principle has been applied successfully to model computational processes that arise in a natural way in two fields of Computer Science different from Denotational Semantics. Concretely, it has been applied to Asymptotic Complexity Analysis in [5] and to Logic Programming in [6]. In the sequel we focus our attention on the application to Asymptotic Complexity Analysis. In 1995, M.P. Schellekens showed in [5] that the seminal Scott idea of modeling the meaning of a denotational specification as, at the same time, the fixed point of a non-recursive mapping and the supremum of the sequence of successive iterations can be adapted to the Asymptotic Complexity Analysis of algorithms. Thus, Schellekens developed a fixed point technique to get asymptotic upper bounds of the complexity of those algorithms whose running time of computing satisfies a recurrence equation of Divide and Conquer type, in such a way that the Scott spirit was preserved. The Schellekens technique has a fundamental difference from Scott’s: the fixed point principle is now based on the Banach fixed point theorem instead of Kleene’s fixed point theorem. It must be pointed out that the method of Schellekens introduces a “distance” function, in fact a quasi-metric, which yields a measure of the degree of approximation of the elements that make up the model and, besides, encodes at the same time the information partial order. Moreover, in contrast to Scott’s qualitative technique, the new method provides a unique fixed point of the self-mapping as the candidate running time of computing of the algorithm (the counterpart of the meaning of the denotational specification in the Scott case). The quantitative approach of the Schellekens technique was seen as an advantage over the Scott approach and inspired many works on the mathematical foundations of Asymptotic Complexity Analysis (see, for instance, [7-16]).

Although the preceding quoted works argue that the Schellekens quantitative approach is very appropriate for Asymptotic Complexity Analysis and illustrate its strengths, the main target of this paper is to make a plea for the original Scott fixed point principle, showing its utility for determining upper and lower asymptotic bounds for those algorithms whose running time of computing fulfills a general recurrence equation. Thus, we will show that the Scott approach remains valid for both Denotational Semantics and Asymptotic Complexity Analysis.

The remainder of the paper is organized as follows: In Section 2 we recall briefly the basic notions from asymptotic complexity of algorithms that will play a central role in our subsequent discussion. Section 3 is devoted to showcasing the utility of Kleene’s fixed point theorem, and thus of the Scott fixed point approach, for obtaining upper and lower bounds of the complexity of those algorithms whose running time satisfies a general recurrence equation. All this will be done requiring only a reduced number of hypotheses and elementary arguments. Finally, we will illustrate the developed technique by applying the results to provide the asymptotic complexity (upper and lower bounds) of the running time of Quicksort.

### Preliminaries

In this section we recall, on the one hand, the celebrated Kleene fixed point theorem and the pertinent notions about partially ordered sets that are necessary in order to state it. On the other hand, we recall the mathematical basics about the Asymptotic Complexity Analysis of algorithms that will play a central role in Section 3, where we will apply the Scott methodology to determine asymptotic bounds for the running time of computing.

#### The Kleene fixed point theorem

Following [1], a partially ordered set is a pair (X, ≤X) such that X is a nonempty set and ≤X is a binary relation on X which fulfills, for all x, y, z ϵ X, the following:

1. x ≤X x (reflexivity);
2. x ≤X y and y ≤X x ⇒ x = y (antisymmetry);
3. x ≤X y and y ≤X z ⇒ x ≤X z (transitivity).

If (X, ≤X) is a partially ordered set and Y ⊆ X, then an upper bound for Y in (X, ≤X) is an element x ϵ X such that y ≤X x for all y ϵ Y. An element z ϵ Y is least in (Y, ≤X) provided that z ≤X x for all x ϵ Y. Thus, the supremum for Y in (X, ≤X), if it exists, is an element z ϵ X which is an upper bound for Y and, in addition, is least in (UB(Y), ≤X), where UB(Y) = {u ϵ X : u is an upper bound for Y}. In addition, given x ϵ X, the sets {y ϵ X : x ≤X y} and {y ϵ X : y ≤X x} will be denoted by ↑≤X x and ↓≤X x, respectively.

On account of [17], a partially ordered set (X, ≤X) is said to be chain complete provided that every increasing sequence has a supremum. A sequence (xn)nϵN is said to be increasing whenever xn ≤X xn+1 for all n ϵ N, where N denotes the set of positive integers. Moreover, a mapping f from a partially ordered set (X, ≤X) into itself is monotone if f(x) ≤X f(y) whenever x ≤X y. Furthermore, a mapping f from a partially ordered set (X, ≤X) into itself is said to be ≤X-continuous if the supremum of the sequence (f(xn))nϵN is f(x) for every increasing sequence (xn)nϵN whose supremum exists and is x. Observe that every ≤X-continuous mapping is always monotone with respect to ≤X. In the following, given a mapping f from a partially ordered set (X, ≤X) into itself, we will denote the set {x ϵ X : f(x) = x} by Fix(f).

Taking into account the above concepts, Kleene’s fixed point theorem can be stated as follows (see, for instance, [17]):

Theorem 1: Let (X, ≤X) be a chain complete partially ordered set and let f: X → X be a ≤X-continuous mapping. Then the following assertions hold:

1. If there exists x ϵ X such that x ≤X f(x), then there exists x* ϵ Fix(f) such that x* ϵ ↑≤X x.
2. If there exists y ϵ X such that f(y) ≤X y and y ϵ ↑≤X x, then x* ≤X y.
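For intuition, Theorem 1 can be exercised computationally on a simple chain complete poset: the powerset of a finite vertex set ordered by inclusion. The graph and the monotone map below are invented for this sketch (they are not taken from the paper); iterating the map from a point x with x ⊆ f(x) climbs an increasing chain whose supremum is a fixed point above x, exactly as assertion 1 describes.

```python
# Kleene iteration on the chain complete poset (P({0,...,5}), ⊆).
# Hypothetical example graph; f is a monotone self-map on subsets.
edges = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: [], 5: []}

def f(s):
    # add all successors of the vertices already in s (monotone in ⊆)
    return s | {w for v in s for w in edges[v]}

x = {0}              # post-fixed point: x ⊆ f(x)
while f(x) != x:     # iterate until the supremum of the chain is reached
    x = f(x)
print(sorted(x))     # → [0, 1, 2, 3, 4], a fixed point above {0}
```

The loop terminates because the iterates form an increasing chain in a finite poset; the resulting fixed point is the set of vertices reachable from 0.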
##### Fundamentals of Asymptotic Complexity Analysis

We follow [18] as main reference for Asymptotic Complexity Analysis of algorithms.

The complexity of an algorithm is the quantity of resources required by the algorithm to get a solution to the problem for which it has been designed. A typical resource is the running time of computing. Usually there are several algorithms able to solve a fixed problem, so one goal is to establish which of them solves the problem in less time. With this aim, their running times of computing must be compared. This is done by means of asymptotic analysis, in which the running time of an algorithm is modeled by a function T: N → ]0, ∞] in such a way that T(n) matches up with the time taken by the algorithm to solve the problem when the input data is of size n. Notice that we exclude 0 from the range of T because an algorithm always takes a positive amount of time to solve the problem for which it has been designed.

The running time does not only depend on the input data size n; it also depends on the particular input and the distribution of the input data. Hence, three possible behaviors are usually distinguished when the running time of an algorithm is discussed: the so-called best case, worst case and average case. The best case and the worst case for an input of size n are defined by the minimum and the maximum running time of computing over all inputs of size n, respectively. The average case for an input of size n is defined by the expected value of the running time of computing over all inputs of size n.

Generally, given an algorithm, to state the exact expression of the function which provides its running time of computing is a hard task. For this reason, in most situations the analysis focuses on bounding the running time of computing and, thus, on yielding an approximation of it. Of course, to approximate the running time of computing we can consider upper and lower bounds, which are given by the so-called O and Ω asymptotic complexity classes. Next we recall such notions. To this end, from now on, ≤ will stand for the usual partial order on ]0, ∞]. In the sequel, we will denote by τ the set {f: N → ]0, ∞] : f is monotone with respect to ≤ and unbounded}. Observe that if a function f models the running time of computing of an algorithm, then f is expected to be monotone and unbounded.

Consider two functions f, g ϵ τ. Then f ϵ O(g) (respectively, f ϵ Ω(g)) if and only if there exist n0 ϵ N and c ϵ ]0, ∞[ satisfying f(n) ≤ cg(n) (respectively, cg(n) ≤ f(n)) for all n ϵ N with n ≥ n0. Moreover, the case in which f ϵ O(g) ∩ Ω(g) is denoted by f ϵ Θ(g).
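These definitions can be sanity-checked numerically on sample functions. The functions and witnesses below are invented for illustration, and a finite check is only evidence, not a proof:

```python
# Hypothetical sample functions: f(n) = 3n + 5 and g(n) = n.
f = lambda n: 3 * n + 5
g = lambda n: n

# Witnesses for f ∈ O(g): c = 4, n0 = 5, since 3n + 5 ≤ 4n ⇔ n ≥ 5.
assert all(f(n) <= 4 * g(n) for n in range(5, 10_000))
# Witnesses for f ∈ Ω(g): c = 1, n0 = 1, since n ≤ 3n + 5 always.
assert all(1 * g(n) <= f(n) for n in range(1, 10_000))
# Both memberships hold, so f ∈ Θ(g).
print("f ∈ Θ(g) witnessed on the tested range")
```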

Clearly, if f ϵ τ describes the running time of computing of the algorithm under discussion, then the fact that f ϵ O(g) gives that the function g represents an asymptotic upper bound of the running time. Therefore, if we do not know the exact expression of the function f, then g provides approximate information about the running time of computing for each input size n, in the sense that the algorithm takes a time to process the input data of size n bounded above by the value g(n) (up to a constant factor). Clearly, a similar interpretation can be given when we consider f ϵ Ω(g).

We end the section by noting that the set τ becomes a partially ordered set when we endow it with the partial order ≤τ given by f ≤τ g ⇔ f(n) ≤ g(n) for all n ϵ N.

##### The Scott fixed point technique applied to Asymptotic Complexity Analysis

In this section we show the utility of Kleene’s fixed point theorem, and thus of the Scott fixed point technique, as a mathematical tool for the analysis of the asymptotic complexity of algorithms.

Usually the analysis of the running time of computing of algorithms leads to recurrence equations on N of the following type:

$T(n)=\begin{cases}c_{n} & \text{if } n\le n_{0},\\ \sum_{j=1}^{k}a_{j}T(n-j)+d(n) & \text{if } n>n_{0},\end{cases}$           (1)

where d ϵ τ such that d(n) < ∞ for all n ϵ N, n0, k ϵ N and ci, aj ϵ ]0, ∞[ for all i = 1,…,n0 and j = 1,…,k.
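A recurrence of type (1) can be evaluated directly by memoized recursion. The parameters below are a sample choice assumed for illustration (n0 = 2, k = 2, a1 = a2 = 1, d ≡ 0, c1 = c2 = 1), which turns (1) into the Fibonacci recurrence mentioned later in the text:

```python
from functools import lru_cache

# Assumed sample parameters for a type-(1) recurrence.
n0, a, c = 2, (1.0, 1.0), (1.0, 1.0)
d = lambda n: 0.0

@lru_cache(maxsize=None)
def T(n):
    if n <= n0:
        return c[n - 1]                    # initial conditions: T(n) = c_n
    # T(n) = a_1·T(n-1) + a_2·T(n-2) + d(n)
    return sum(a[j] * T(n - (j + 1)) for j in range(len(a))) + d(n)

print(T(10))   # → 55.0, the 10th Fibonacci number
```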

However, a class of recurrence equations that differs from those of type (1) and that arises in a natural way in the analysis of the running time of computing of many Divide and Conquer algorithms is that of the so-called multi-term master recurrences (see, for instance, [19]). The aforementioned recurrence equations are given as follows:

$T(n)=\begin{cases}c_{n} & \text{if } n\le n_{0},\\ \sum_{i=1}^{k}a_{i}T(n/b_{i})+d(n) & \text{if } n>n_{0},\end{cases}$           (2)

where d ϵ τ such that d(n) < ∞ for all n ϵ N, n0, k ϵ N, ci ϵ ]0, ∞[ for all i = 1,…,n0, and aj, bj ϵ ]0, ∞[ with bj > 1 for all j = 1,…,k.

According to [20] the functions d in the preceding expressions are called forcing or input functions.

Examples of algorithms whose running time fulfills a recurrence equation of the above introduced types (either (1) or (2)) are the celebrated Quicksort (worst case), Merge sort (average case), Hanoi Towers Puzzle, Largetwo (average case) and Fibonacci (see, for instance, [5,18,20,21]). Notice that among the aforementioned algorithms there are both recursive and non-recursive ones.

In the classical literature many techniques have been developed for obtaining the asymptotic bounds of those algorithms whose running time of computing satisfies the preceding recurrence equations. In general, such techniques are specific for each case under study and involve tedious and hard arguments either from mathematical induction or from calculus involving integrals or limits. A general view of the classical treatment of the topic can be found in [18,22].

In what follows we develop a method based on the Kleene fixed point theorem which allows one to determine asymptotic bounds in those cases in which the running time of computing satisfies a general recurrence equation which retrieves, as particular cases, the recurrence equations of type (1) and (2). The new method does not intend to compete with the standard techniques to analyze the complexity of algorithms based on the classical arguments.

The authentic purpose of the novel method is to introduce a formal treatment of asymptotic complexity by means of basic and elementary arguments which provide, in some sense, a fixed point theoretical counterpart of the classical techniques, in the same way that Scott’s fixed point technique did in Denotational Semantics (see [2-4]). Besides, the new method presents the advantage, on the one hand, of avoiding the assumption of a “convergence condition” in the spirit of Schellekens for all functions involved, that is, $\sum _{n=1}^{\infty }{2}^{-n}\frac{1}{f\left(n\right)}<\infty$ (we refer the reader to [5,15] for a detailed treatment of the topic), and, on the other hand, of preserving the Scott spirit and being able to compute, in a natural way and simultaneously, both the meaning and the running time of computing of recursive algorithms.

### The technique

Let k ϵ N and let gi: N → N, i = 1,…,k, be unbounded monotone functions with respect to the partial order ≤ such that gi(n) < n for all n ϵ N and all i = 1,…,k.

Fix n0 ϵ N and denote by Nn0 the set {n ϵ N : n > n0}. Assume that Φ: Nn0 × ]0, ∞]k → ]0, ∞] is an unbounded monotone function with respect to the partial order ≤, where ≤ is defined point-wise, i.e., (x1,…,xk+1) ≤ (y1,…,yk+1) ⇔ xi ≤ yi for all i = 1,…,k + 1.

Next fix c1,…,cn0 ϵ ]0, ∞[. Consider the general recurrence equations on N given for all n ϵ N by

$T(n)=\begin{cases}c_{n} & \text{if } n\le n_{0},\\ \Phi(n,T(g_{1}(n)),...,T(g_{k}(n))) & \text{if } n>n_{0}.\end{cases}$        (3)

Clearly the recurrence equations of type (1) and (2) are obtained as particular cases of recurrence equation (3) when $\Phi \left(n,{x}_{1},...,{x}_{k}\right)=\sum _{i=1}^{k}{a}_{i}{x}_{i}+d\left(n\right),$           (4)

for all n ϵ Nn0 and for all x1,…,xk ϵ ]0, ∞] and, in addition, the implicated functions gi are chosen respectively as follows:

1. gi(n) = n − i for all n ϵ Nn0 and for all i = 1,…,k (type (1));

2. gi(n) = n/bi for all n ϵ Nn0 and for all i = 1,…,k (type (2)).

Another family of recurrence equations that arises in a natural way in the asymptotic analysis of algorithms is given as follows:

$T(n)=\begin{cases}c_{n} & \text{if } n\le n_{0},\\ \sum_{i=1}^{k}a_{i}(n)T(g_{i}(n))+d(n) & \text{if } n>n_{0},\end{cases}$      (5)

where the constants a1,…,ak have been replaced by functions ai: Nn0 → ]0, ∞[ for all i = 1,…,k. Clearly, the preceding recurrence equations can be retrieved from (3) by taking the function Φ as follows:

$\Phi \left(n,{x}_{1},...,{x}_{k}\right)=\sum _{i=1}^{k}{a}_{i}\left(n\right){x}_{i}+d\left(n\right)$              (6)

for all n ϵ Nn0 and for all x1,…,xk ϵ ]0, ∞]. Of course, the one-dimensional case appears often in the literature (see [23]), i.e., when k = 1 and, hence,

$T(n)=\begin{cases}c_{n} & \text{if } n\le n_{0},\\ a(n)T(g(n))+d(n) & \text{if } n>n_{0},\end{cases}$             (7)

with a: Nn0 → ]0, ∞[ and d ϵ τ. An illustrative example of those algorithms whose running time of computing satisfies a recurrence of type (7) is Quicksort (average behavior). Indeed, its running time is the solution to the recurrence equation given below (see, for instance, [20,22]):

$T(n)=\begin{cases}c & \text{if } n=1,\\ \frac{n+1}{n}T(n-1)+2 & \text{if } n>1,\end{cases}$           (8)

where c ϵ ]0, ∞[.
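Recurrence (8) can be unfolded iteratively, and doing so hints at its growth rate: the ratio T(n)/(n·log n) stays bounded, consistent with the n·log(n) bound retrieved later in the paper. The initial value c = 1 is an arbitrary choice for this sketch:

```python
import math

def T(n, c=1.0):
    # unfold T(m) = ((m+1)/m)·T(m-1) + 2 from the initial value T(1) = c
    t = c
    for m in range(2, n + 1):
        t = (m + 1) / m * t + 2
    return t

for n in (10, 100, 1000, 10_000):
    print(n, round(T(n) / (n * math.log(n)), 3))   # bounded ratios
```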

With the aim of getting a technique able to provide asymptotic bounds of the running time of computing of algorithms whose analysis yields the preceding recurrence equations, we first stress that every recurrence equation of type (3) always has a unique solution, given the initial conditions c1,…,cn0 and taking into account that Φ and g1,…,gk are functions (and thus they give a unique image for a given input). So we only need to focus our attention on how we can get a bound for such a solution without knowing its specific expression. To this end, denote by τn0,c the subset of τ given by

τn0,c = {f ϵ τ : f(n) = cn for all n ≤ n0}.

Now, define the functional FΦ: τn0,c → τn0,c by

$F_{\Phi}(f)(n)=\begin{cases}c_{n} & \text{if } n\le n_{0},\\ \Phi(n,f(g_{1}(n)),...,f(g_{k}(n))) & \text{if } n>n_{0},\end{cases}$            (9)

for all f ϵ τn0,c. It is clear that a function belonging to τn0,c is a solution to the recurrence equation (3) if and only if it is a fixed point of the functional FΦ.
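On a finite prefix {1,…,N} the functional (9) can be iterated explicitly. The data below are assumed for the sketch: the one-dimensional case (7) with a ≡ 1, d ≡ 1, g(n) = n − 1, n0 = 1 and c1 = 1, whose exact solution is T(n) = n. Functions are represented by their values on the prefix:

```python
N, n0 = 30, 1
c = {1: 1.0}   # initial condition c_1 = 1

def F(f):
    # F_Φ(f)(n) = c_n if n ≤ n0, and Φ(n, f(g1(n))) = f(n-1) + 1 otherwise
    return {n: (c[n] if n <= n0 else f[n - 1] + 1.0) for n in range(1, N + 1)}

f = {n: 1.0 for n in range(1, N + 1)}   # starting point with f ≤τ F_Φ(f)
while F(f) != f:                        # Kleene iteration up to the fixed point
    f = F(f)
print(f[N])   # → 30.0: the unique solution T(n) = n on {1,…,N}
```

The iterates form an increasing chain whose supremum, reached here after finitely many steps, is the fixed point of FΦ, i.e., the solution to the recurrence on the prefix.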

Taking into account the preceding notation we show that the fundamental assumptions that are required by the Kleene fixed point theorem, chain completeness and order-continuity, are satisfied in our approach.

Proposition 2: (τn0,c, ≤τ) is chain complete.

Proof: Let (fm)mϵN be an increasing sequence in (τn0,c, ≤τ). Then the function f given by

$f(n)=\sup_{m\in N}f_{m}(n)$

for all n ϵ N is the supremum of (fm)mϵN in (τn0,c, ≤τ). Clearly, if fm ϵ τn0,c for all m ϵ N, then f ϵ τn0,c.

#### Theorem 3:

Let n0 ϵ N and let Φ be the unbounded monotone function associated to (3). If there exists K ϵ ]0, ∞[ such that

$\Phi(n,x_{1}+\varepsilon,...,x_{k}+\varepsilon)<\Phi(n,x_{1},...,x_{k})+K\varepsilon$          (10)

for all n ϵ Nn0, for all x1,…,xk ϵ ]0, ∞] and for all ε ϵ ]0, ∞[, then the functional FΦ is ≤τ-continuous.

Proof: Let (fm)mϵN be an increasing sequence in (τn0,c, ≤τ) and let f ϵ τn0,c be its supremum. Next define the function fΦ ϵ τn0,c by fΦ(n) = sup_{mϵN} FΦ(fm)(n) for all n ϵ N; then fΦ is the supremum of (FΦ(fm))mϵN. Since Φ is monotone with respect to ≤ and fm ≤τ f for all m ϵ N, we obtain that

$\Phi(n,f_{m}(g_{1}(n)),...,f_{m}(g_{k}(n)))\le\Phi(n,f(g_{1}(n)),...,f(g_{k}(n)))$

for all n ϵ Nn0. It follows that

$f_{\Phi}(n)\le\Phi(n,f(g_{1}(n)),...,f(g_{k}(n)))=F_{\Phi}(f)(n)$      (11)

for all n ϵ Nn0. Hence fΦ ≤τ FΦ(f).

It remains to prove that FΦ(f) ≤τ fΦ. To this end, we can assume without loss of generality that fΦ(n) < ∞ for all n ϵ Nn0, since otherwise, by inequality (11), we deduce that FΦ(f)(n) = fΦ(n) = ∞. Fix n ϵ Nn0. Then, given ε ϵ ]0, ∞[, there exists mε ϵ N such that $f(g_{i}(n))<f_{m_{\varepsilon}}(g_{i}(n))+\varepsilon$ for all i = 1,…,k.

Hence the monotony of Φ and (10) give

$\Phi(n,f(g_{1}(n)),...,f(g_{k}(n)))\le\Phi(n,f_{m_{\varepsilon}}(g_{1}(n))+\varepsilon,...,f_{m_{\varepsilon}}(g_{k}(n))+\varepsilon)<\Phi(n,f_{m_{\varepsilon}}(g_{1}(n)),...,f_{m_{\varepsilon}}(g_{k}(n)))+\varepsilon K\le f_{\Phi}(n)+\varepsilon K.$

Whence we deduce that FΦ(f)(n) ≤ fΦ(n) for all n ϵ Nn0. So FΦ(f) ≤τ fΦ. Therefore we conclude that FΦ(f) = fΦ and, thus, that FΦ is ≤τ-continuous.

Notice that the functions Φ given by (4) and (6) are unbounded monotone and, in addition, they satisfy the regularity condition (10). The next example proves that the monotony of Φ alone does not provide the ≤τ-continuity of the functional FΦ.

Example 4: Fix n0 ϵ N and c1,…,cn0 ϵ ]0, ∞[. Let g: N → N be an unbounded monotone function with respect to ≤ such that g(n) < n for all n ϵ N. Consider the function Φ: Nn0 × ]0, ∞] → ]0, ∞] given by

$\Phi(n,x)=\begin{cases}\infty & \text{if } x\ge 1,\\ x & \text{if } x<1.\end{cases}$

It is obvious that Φ is monotone with respect to ≤. Moreover, Φ does not satisfy the regularity condition (10), since there is no K ϵ ]0, ∞[ such that

$\Phi(n,x+\varepsilon)\le\Phi(n,x)+K\varepsilon$

for all n ϵ Nn0 and x, ε ϵ ]0, ∞[. Indeed, put ε = 1 and take x = 1/2. Then

$\Phi(n,\tfrac{1}{2})+K=\tfrac{1}{2}+K<\infty=\Phi(n,\tfrac{3}{2}).$

A straightforward computation then shows that FΦ is not ≤τ-continuous: one can exhibit an increasing sequence (fm)mϵN in (τn0,c, ≤τ) whose supremum is a function f ϵ τn0,c such that the supremum of the increasing sequence (FΦ(fm))mϵN is again f while, nevertheless, FΦ(f) ≠ f.

The next example shows that the regularity condition (10) alone is not enough to assure the ≤τ-continuity of the functional FΦ.

Example 5: Fix n0 ϵ N and c1,…,cn0 ϵ ]0, ∞[. Let g1, g2: N → N be two unbounded monotone functions with respect to ≤ such that gi(n) < n for all n ϵ N and i = 1, 2. Consider the function Φ: Nn0 × ]0, ∞]2 → ]0, ∞] given by

$\Phi(n,x_{1},x_{2})=\frac{1}{x_{1}+x_{2}}.$

It is evident that Φ satisfies the regularity condition (10) with K = 1, since

$\Phi(n,x_{1}+\varepsilon,x_{2}+\varepsilon)=\frac{1}{x_{1}+x_{2}+2\varepsilon}<\frac{1}{x_{1}+x_{2}}+\varepsilon$

for all n ϵ Nn0, for all x1, x2 ϵ ]0, ∞] and for all ε ϵ ]0, ∞[. Moreover, it is clear that Φ is not monotone with respect to ≤. Indeed,

$\Phi(n,\tfrac{x_{1}}{2},\tfrac{x_{2}}{2})=\frac{2}{x_{1}+x_{2}}\ge\frac{1}{x_{1}+x_{2}}=\Phi(n,x_{1},x_{2})$

for all x1, x2 ϵ ]0, ∞], although (x1/2, x2/2) ≤ (x1, x2). It follows that FΦ is not monotone with respect to ≤τ and, thus, it is not ≤τ-continuous.

The next example yields that the ≤τ-continuity of FΦ does not guarantee that the function Φ fulfills the regularity condition (10).

Example 6: Fix n0 ϵ N and c1,…,cn0 ϵ ]0, ∞[. Let g1, g2: N → N be two unbounded monotone functions with respect to ≤ such that gi(n) < n for all n ϵ N and i = 1, 2. Consider the function Φ: Nn0 × ]0, ∞]2 → ]0, ∞] given by

$\Phi(n,x_{1},x_{2})=x_{1}x_{2}.$

It is evident that the functional FΦ is ≤τ-continuous. However, the function Φ does not satisfy the regularity condition (10). Indeed, assume, with the purpose of contradiction, that the aforesaid condition holds. Then there exists K ϵ ]0, ∞[ such that

$\Phi(n,x_{1}+\varepsilon,x_{2}+\varepsilon)<\Phi(n,x_{1},x_{2})+\varepsilon K$

for all n ϵ Nn0, for all x1, x2 ϵ ]0, ∞[ and for all ε ϵ ]0, ∞[. It follows that $x_{1}+x_{2}+\varepsilon<K$ for all x1, x2 ϵ ]0, ∞[. Set x1 = x2 = K/2. Then, from the preceding inequality, we deduce that $K+\varepsilon<K.$

This is impossible. So we conclude that Φ does not satisfy the regularity condition (10).

Once we have guaranteed that in our framework the main components required to apply the Kleene fixed point theorem hold, we introduce the new methodology to provide asymptotic complexity bounds for those algorithms whose running time of computing fulfills a recurrence equation of type (3).

Theorem 7: Let n0 ϵ N and let c1,…,cn0 ϵ ]0, ∞[. Assume that Φ: Nn0 × ]0, ∞]k → ]0, ∞] is an unbounded monotone function with respect to ≤ which satisfies the regularity condition (10) and that f is the solution to the recurrence equation of type (3). If there exists g ϵ τn0,c such that g ≤τ FΦ(g), then f ϵ Ω(g). Moreover, if there exists h ϵ τn0,c such that h ϵ ↑≤τ g and FΦ(h) ≤τ h, then f ϵ O(h).

Proof: First of all, note that f is the unique solution to (3) and, thus, the unique fixed point of FΦ in τn0,c. Suppose that there exists g ϵ τn0,c such that g ≤τ FΦ(g). Since (τn0,c, ≤τ) is chain complete and FΦ is ≤τ-continuous we have, by assertion 1 in the statement of Kleene’s fixed point theorem (Theorem 1), that there exists a fixed point f* ϵ τn0,c of FΦ such that f* ϵ ↑≤τ g and, hence, that g ≤τ f*. The fact that FΦ admits only a unique fixed point gives that f = f* and that f ϵ Ω(g).

Suppose that there exists h ϵ τn0,c such that h ϵ ↑≤τ g and FΦ(h) ≤τ h. Assertion 2 in the statement of Theorem 1 provides that f ≤τ h. Therefore, f ϵ O(h).
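To see Theorem 7 in action on a toy instance not taken from the paper, consider the recurrence T(n) = T(n−1) + n with T(1) = 1, whose exact solution is n(n+1)/2. The hypothetical witnesses g(n) = n²/2 (lower) and h(n) = n² (upper, with h above g pointwise) can be checked on a finite range to satisfy the hypotheses g ≤τ FΦ(g) and FΦ(h) ≤τ h, so the theorem yields T ϵ Ω(g) ∩ O(h), i.e., T ϵ Θ(n²):

```python
N = 1000   # finite range used for the numerical check (not a full proof)

def F(f):
    # functional (9) for Φ(n, x) = x + n, g1(n) = n - 1, n0 = 1, c1 = 1
    return {n: (1.0 if n == 1 else f[n - 1] + n) for n in range(1, N + 1)}

g = {n: (1.0 if n == 1 else n * n / 2) for n in range(1, N + 1)}     # lower witness
h = {n: (1.0 if n == 1 else float(n * n)) for n in range(1, N + 1)}  # upper witness

ok_lower = all(g[n] <= F(g)[n] for n in range(1, N + 1))   # g ≤τ F_Φ(g)
ok_upper = all(F(h)[n] <= h[n] for n in range(1, N + 1))   # F_Φ(h) ≤τ h
print(ok_lower, ok_upper)   # → True True
```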

##### An illustrative example

Let us consider Quicksort. According to [22], its running time of computing fQ, for the average behavior, is the solution to the following recurrence equation:

$T(n)=\begin{cases}c & \text{if } n=1,\\ \frac{n+1}{n}T(n-1)+2 & \text{if } n>1,\end{cases}$      (12)

where c ϵ ]0, ∞[.

As indicated in Subsection 3.1, the preceding recurrence equation is retrieved from (3) when we consider n0 = 1, c1 = c, k = 1, g1(n) = n − 1 and $\Phi(n,x)=\frac{n+1}{n}x+2$ for all n ϵ N1 and x ϵ ]0, ∞].

Notice that $\Phi(n,x+\varepsilon)=\frac{n+1}{n}(x+\varepsilon)+2\le\Phi(n,x)+2\varepsilon$ for all n ϵ N1 and for all x, ε ϵ ]0, ∞[, so the regularity condition (10) holds with K = 2.

In the light of the preceding facts, all assumptions required in the statement of Theorem 7 hold. With the aim of making clear the use of the technique yielded by the aforesaid theorem, we verify that the asymptotic bounds known in the literature are retrieved from our approach. To this end, let fQ be the solution to the recurrence equation (12).

On the one hand, it is not hard to check that every g ϵ τ1,c with g ≤τ FΦ(g) fulfills the following:

$g(n)\le\begin{cases}c & \text{if } n=1,\\ 2+\frac{3}{2}c & \text{if } n=2,\\ \frac{c}{2}(n+1)+2(n+1)\sum_{i=1}^{n-2}\frac{1}{i+2} & \text{if } n\ge 3.\end{cases}$

Next take f ϵ τ1,c defined by

$f(n)=\begin{cases}c & \text{if } n=1,\\ 2+\frac{3}{2}c & \text{if } n=2,\\ \frac{c}{2}(n+1)+2(n+1)\sum_{i=1}^{n-2}\frac{1}{i+2} & \text{if } n\ge 3.\end{cases}$

By Theorem 7 we deduce that fQ ϵ Θ(f), which immediately gives the well-known asymptotic bound fQ ϵ Θ(flog) (see, for instance, [24]), where

$f_{\log}(n)=\begin{cases}c & \text{if } n=1,\\ 2+\frac{3}{2}c & \text{if } n=2,\\ n\log(n) & \text{if } n\ge 3.\end{cases}$    (13)
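The growth of the closed-form function f above can be checked numerically: with c = 1 assumed for the initial condition, the ratio f(n)/(n·log n) remains within fixed positive bounds, consistent with fQ ϵ Θ(flog):

```python
import math

def f(n, c=1.0):
    # the closed-form function f of this subsection, with c = 1 assumed
    if n == 1:
        return c
    if n == 2:
        return 2 + 1.5 * c
    # (c/2)(n+1) + 2(n+1)·Σ_{i=1}^{n-2} 1/(i+2)
    return c / 2 * (n + 1) + 2 * (n + 1) * sum(1 / (i + 2) for i in range(1, n - 1))

ratios = [f(n) / (n * math.log(n)) for n in (10, 100, 1000)]
print([round(r, 3) for r in ratios])   # stays within fixed bounds
```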

We end this subsection by pointing out that Scott’s technique presents a unified mathematical approach useful at the same time for Asymptotic Complexity Analysis and Denotational Semantics. Thus, for instance, such a method allows one to model simultaneously both the meaning and the running time of computing of recursive algorithms using recursive denotational specifications. An easy, but representative, example to which the technique could be applied is given by an algorithm computing the factorial function by means of a recursive specification whose meaning satisfies the denotational specification (14) and whose running time of computing fulfills the recurrence equation (15):

$fact(n)=\begin{cases}1 & \text{if } n=1,\\ n\cdot fact(n-1) & \text{if } n>1,\end{cases}$                    (14)

$T(n)=\begin{cases}c & \text{if } n=1,\\ T(n-1)+d & \text{if } n>1.\end{cases}$                    (15)

### Conclusion

In 1972, D.S. Scott developed a qualitative mathematical technique for modeling the meaning of recursive algorithms in Denotational Semantics. We have shown that the same original Scott technique remains valid for the Asymptotic Complexity Analysis of algorithms. So we have seen that such a qualitative approach presents a unified mathematical method that is useful for both Asymptotic Complexity Analysis and Denotational Semantics. This fact presents an advantage over the quantitative technique, based on Banach’s fixed point theorem, introduced in 1995 by M.P. Schellekens, because such a technique has been designed mainly for Asymptotic Complexity Analysis. Moreover, the use of the qualitative approach agrees with the qualitative character of the complexity analysis of algorithms in Computer Science, where providing the asymptotic behavior of the running time is more important than getting the exact expression of the running time itself. Furthermore, using the qualitative framework we avoid requiring the “convergence condition” assumed by the quantitative Schellekens approach, whose unique purpose is to guarantee the soundness of a distance which provides the quantitative information but does not have, a priori, a computational motivation.

### Acknowledgement

This research was funded by the Spanish Ministry of Economy and Competitiveness under Grants TIN2014-56381-REDT (LODISCO) and TIN2016-81731-REDT (LODISCO II) and AEI/FEDER, UE funds.