ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu

APPROXIMATE SOLUTIONS OF RANDOMIZED NON-AUTONOMOUS COMPLETE LINEAR DIFFERENTIAL EQUATIONS VIA PROBABILITY DENSITY FUNCTIONS

JULIA CALATAYUD, JUAN CARLOS CORTÉS, MARC JORNET

Abstract. Solving a random differential equation means obtaining an exact or approximate expression for the solution stochastic process and computing its statistical properties, mainly the mean and variance functions. A major challenge, however, is the computation of the probability density function of the solution. In this article we construct reliable approximations of the probability density function of the randomized non-autonomous complete linear differential equation by assuming that the diffusion coefficient and the source term are stochastic processes and the initial condition is a random variable. The key tools for constructing these approximations are the random variable transformation technique and Karhunen-Loève expansions. The study is divided into a large number of cases with a double aim: firstly, to extend the results available in the extant literature and, secondly, to embrace as many practical situations as possible. Finally, a wide variety of numerical experiments illustrate the potentiality of our findings.

1. Introduction and motivation

The behavior of physical phenomena is often governed by chance and does not follow strict deterministic laws. Specific examples can be found in many areas. For example, in Thermodynamics, in the analysis of crystal lattice dynamics with a small percentage of small atoms, randomness arises since the atom masses may take two or more values according to probabilistic laws [10]; in the analysis of free vibrations in Mechanics, the heterogeneity of the material may be very complicated, so it may be more suitable to model it via appropriate random fields [31]; in Epidemiology, the rate of transmission of a disease depends upon very complex factors, such as genetics, weather, geography, etc., that can be better described using adequate probabilistic distributions rather than deterministic tools [1]. As a consequence, it is reasonable to deal with mathematical models that consider randomness in their formulation. Motivated by this fact, i.e. the random nature involved in numerous physical phenomena together with the ubiquity of deterministic differential equations in formulating classical laws in thermodynamics, mechanics, epidemiology, etc., it is natural to randomize classical

2010 Mathematics Subject Classification. 34F05, 60H35, 60H10, 65C30, 93E03.

Key words and phrases. Random non-autonomous complete linear differential equation; random variable transformation technique; Karhunen-Loève expansion; probability density function.

©2019 Texas State University.

Submitted July 2, 2018. Published July 16, 2019.

differential equations to describe mathematical models. This approach leads to the areas of stochastic differential equations (SDEs) and random differential equations (RDEs). In the former case (SDEs), differential equations are forced by stochastic processes having irregular sample behavior (e.g., nowhere differentiable), such as the Wiener process or Brownian motion. SDEs are written as Itô or Stratonovich integrals rather than in differential form. This approach permits dealing with uncertainty via very irregular stochastic processes like white noise, but it assumes specific probabilistic properties (Gaussianity, independent increments, stationarity, etc.). The rigorous treatment of SDEs requires a special stochastic calculus, usually referred to as Itô calculus, whose cornerstone result is Itô's Lemma [21, 18]. RDEs, by contrast, are those in which random effects are directly manifested in the input data (initial/boundary conditions, source term and coefficients) [26, 25]. An important advantage of RDEs is that a wider range of probabilistic patterns is allowed for the input data, like beta, gamma and Gaussian distributions (including Brownian motion), but not white noise. Furthermore, the analysis of RDEs takes advantage of classical calculus, where powerful tools are available [26]. When dealing with both SDEs and RDEs, the main goals are to compute, exactly or numerically, the solution stochastic process, say x(t), and its main statistical functions (mostly the mean, E[x(t)], and the variance, V[x(t)]). A more ambitious target is to compute its n-dimensional probability density function, f_n(x_1, t_1; . . . ; x_n, t_n), whenever it exists, which gives the probability density of the random vector (x(t_1), . . . , x(t_n)), i.e., the joint distribution of the solution at n arbitrary time instants t_i, 1 ≤ i ≤ n [26, pp. 34-36].

This is generally a very difficult goal, and in practice significant efforts have been made to compute just the first probability density function, f_1(x, t), since from it all one-dimensional statistical moments of the stochastic process x(t) can be derived using the fact that

E[(x(t))^k] = ∫_{−∞}^{∞} x^k f_1(x, t) dx,  k = 1, 2, . . . .

This permits computing the variance, skewness, kurtosis, etc., as well as determining the probability that the solution lies in a specific interval of interest,

P[a ≤ x(t) ≤ b] = ∫_{a}^{b} f_1(x, t) dx,

for every time instant t.
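For a concrete sense of these formulas, moments and interval probabilities can be read off from f_1 by numerical quadrature. The sketch below is illustrative only: it uses a standard normal density as a stand-in for f_1(·, t) at a fixed t (an assumption for illustration, not the solution density of any particular problem).

```python
import numpy as np

# Stand-in for f_1(., t) at a fixed time t: a standard normal density
# (an assumption for illustration only).
x = np.linspace(-10.0, 10.0, 20001)
f1 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def trapz(y, grid):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(grid)) / 2)

# One-dimensional moments E[x(t)^k] = \int x^k f_1(x, t) dx
mean = trapz(x * f1, x)
variance = trapz(x**2 * f1, x) - mean**2

# P[a <= x(t) <= b] = \int_a^b f_1(x, t) dx, here with a = -1, b = 1
mask = (x >= -1.0) & (x <= 1.0)
prob = trapz(f1[mask], x[mask])

print(abs(mean) < 1e-8, round(variance, 4), round(prob, 4))  # True 1.0 0.6827
```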

In the context of SDEs, it is known that f_n(x_1, t_1; . . . ; x_n, t_n) satisfies a Fokker-Planck partial differential equation that needs to be solved, exactly or numerically [21, 18], while the random variable transformation technique [26, ch. 6], [22, ch. 5 and ch. 6] stands out as a powerful strategy to address this problem in the framework of RDEs. Here, we restrict our analysis to an important class of RDEs, taking advantage of the random variable transformation technique. This method has been successfully applied to study significant random ordinary differential equations [8, 4] and random partial differential equations [9, 14, 16, 15, 33] that appear in different areas such as epidemiology, physics, engineering, etc. In all these interesting contributions, uncertainty is considered via random variables rather than stochastic processes. From a mathematical standpoint, this restricts the generality of those contributions.

This article is devoted to computing approximations of the first probability density function of the random non-autonomous complete linear differential equation under very general hypotheses. As we shall see later, our approach takes advantage of the aforementioned random variable transformation technique together with the Karhunen-Loève expansion [19, ch. 5]. It is important to point out that this fundamental problem has not yet been fully tackled in its general formulation. Indeed, to the best of our knowledge, in a first step, the autonomous case, corresponding to random first and second-order linear differential equations, was addressed in [3] and [5], respectively. In a second step, these results were extended to random autonomous first-order linear systems in [6]. Third, new results in the spirit of this paper have recently been published for the random non-autonomous homogeneous linear differential equation in [7]. However, besides restricting the analysis to the homogeneous case, which obviously limits the applicability of the results in real applications, the theoretical analysis in that contribution relies upon a number of hypotheses that will be generalized in the present paper. This is a very important novelty of this paper with respect to [7]. Indeed, apart from dealing with the non-homogeneous case, in this paper we will construct reliable approximations of the first probability density function of the random non-autonomous complete linear differential equation within a fairly wide variety of scenarios covering most practical cases.

For the sake of clarity, we first introduce some notation and results that will be required throughout this article. We will also establish some results, based upon the sample-path approach to the random non-autonomous non-homogeneous linear differential equation, that are aimed at completing our analysis.

Consider the non-autonomous complete linear differential equation

x'(t) = a(t) x(t) + b(t),  t ∈ [t_0, T],
x(t_0) = x_0,  (1.1)

where a(t) is the diffusion coefficient, b(t) is the source term and x_0 is the initial condition. The formal solution to this Cauchy problem is

x(t) = x_0 e^{∫_{t_0}^{t} a(s) ds} + ∫_{t_0}^{t} b(s) e^{∫_{s}^{t} a(r) dr} ds,  (1.2)

where the integrals are understood in the Lebesgue sense.
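As a numerical sanity check of the variation-of-constants formula (1.2), one can compare it against a direct integration of the Cauchy problem. The sketch below uses an illustrative deterministic choice of the data (a(t) = cos t, b(t) = 1 + t, x_0 = 2 and [t_0, T] = [0, 1] are assumptions, not taken from the paper).

```python
import numpy as np

# Illustrative data (assumptions): a(t) = cos t, b(t) = 1 + t, x0 = 2 on [0, 1].
a = np.cos
b = lambda s: 1.0 + s
x0, t0, T = 2.0, 0.0, 1.0

t = np.linspace(t0, T, 20001)
h = t[1] - t[0]
at = a(t)

# A(t) = \int_{t_0}^{t} a(s) ds via a cumulative trapezoidal rule
A = np.concatenate(([0.0], np.cumsum((at[1:] + at[:-1]) * h / 2)))

# Formula (1.2) evaluated at t = T
integrand = b(t) * np.exp(A[-1] - A)
x_formula = x0 * np.exp(A[-1]) + np.sum((integrand[1:] + integrand[:-1]) * h / 2)

# Reference value: classical Runge-Kutta 4 applied to x' = a(t) x + b(t)
f = lambda s, y: a(s) * y + b(s)
x_rk4 = x0
for k in range(len(t) - 1):
    k1 = f(t[k], x_rk4)
    k2 = f(t[k] + h / 2, x_rk4 + h * k1 / 2)
    k3 = f(t[k] + h / 2, x_rk4 + h * k2 / 2)
    k4 = f(t[k] + h, x_rk4 + h * k3)
    x_rk4 += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

print(abs(x_formula - x_rk4) < 1e-6)  # True: both approximate x(T)
```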

Now we consider (1.1) in a random setting, meaning that we work on an underlying complete probability space (Ω, F, P), where Ω is the set of outcomes (generically denoted by ω), F is a σ-algebra of events and P is a probability measure. We shall assume that the initial condition x_0(ω) is a random variable and

a = {a(t, ω) : t_0 ≤ t ≤ T, ω ∈ Ω},  b = {b(t, ω) : t_0 ≤ t ≤ T, ω ∈ Ω}

are stochastic processes defined on (Ω, F, P). In this way, the formal solution to the randomized initial value problem (1.1) is given by the following stochastic process:

x(t, ω) = x_0(ω) e^{∫_{t_0}^{t} a(s,ω) ds} + ∫_{t_0}^{t} b(s, ω) e^{∫_{s}^{t} a(r,ω) dr} ds,  (1.3)

where the integrals are understood in the Lebesgue sense.

Notation. Throughout this paper we will work with Lebesgue spaces. Recall that, if (S, A, μ) is a measure space, we denote by L^p(S) or L^p(S, dμ) (1 ≤ p < ∞) the set of measurable functions f : S → R such that ‖f‖_{L^p(S)} = (∫_S |f|^p dμ)^{1/p} < ∞. We denote by L^∞(S) or L^∞(S, dμ) the set of measurable functions such that ‖f‖_{L^∞(S)} = inf{ sup{|f(x)| : x ∈ S \ N} : μ(N) = 0 } < ∞. We write a.e. as short notation for "almost everywhere", which means that some property holds except on a set of measure zero.

Here, we will deal with several cases: S = T ⊆ R and dμ = dx the Lebesgue measure; S = Ω and μ = P the probability measure; and S = T × Ω and dμ = dx × dP. Notice that f ∈ L^p(T × Ω) if and only if ‖f‖_{L^p(T × Ω)} = (E[∫_T |f(x)|^p dx])^{1/p} < ∞. In the particular case of S = Ω and μ = P, the short notation a.s. stands for "almost surely".

In this article, an inequality related to Lebesgue spaces will be used frequently. It is well known as the generalized Hölder inequality, and it states that, for any measurable functions f_1, . . . , f_m,

‖f_1 · · · f_m‖_{L^1(S)} ≤ ‖f_1‖_{L^{r_1}(S)} · · · ‖f_m‖_{L^{r_m}(S)},  (1.4)

where

1/r_1 + · · · + 1/r_m = 1,  1 ≤ r_1, . . . , r_m ≤ ∞.  (1.5)

When m = 2, inequality (1.4)-(1.5) is simply known as Hölder's inequality. When m = 2, r_1 = 2 and r_2 = 2, inequality (1.4)-(1.5) is termed the Cauchy-Schwarz inequality.
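These inequalities are easy to sanity-check numerically on a discrete measure space (counting measure on a finite set); the vectors and exponents below are arbitrary illustrative choices.

```python
import numpy as np

# Generalized Hölder on S = {1, ..., n} with counting measure; the choice
# m = 3 and r1 = r2 = r3 = 3 satisfies 1/r1 + 1/r2 + 1/r3 = 1, i.e. (1.5).
rng = np.random.default_rng(0)
f1, f2, f3 = rng.standard_normal((3, 1000))

lhs = np.sum(np.abs(f1 * f2 * f3))                    # ||f1 f2 f3||_{L^1(S)}
rhs = np.prod([np.sum(np.abs(f)**3) ** (1 / 3) for f in (f1, f2, f3)])
print(bool(lhs <= rhs))  # True, as (1.4) guarantees

# m = 2 with r1 = r2 = 2 is the Cauchy-Schwarz inequality
cauchy_schwarz = np.sum(np.abs(f1 * f2)) <= np.sqrt(np.sum(f1**2) * np.sum(f2**2))
print(bool(cauchy_schwarz))  # True
```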

For the sake of completeness, we establish under which hypotheses on the data stochastic processes a and b, and in what sense, the stochastic process given in (1.3) is a rigorous solution to the randomized problem (1.1).

To better understand the computations in the proof of Theorem 1.1, let us recall some results relating differentiation and Lebesgue integration. Recall that a function f : [T_1, T_2] → R belongs to A.C([T_1, T_2]) (A.C stands for absolutely continuous) if its derivative f' exists at a.e. x ∈ [T_1, T_2], f' ∈ L^1([T_1, T_2]) and f(x) = f(T_1) + ∫_{T_1}^{x} f'(t) dt for all x ∈ [T_1, T_2] (i.e., f satisfies the fundamental theorem of calculus for Lebesgue integration). Equivalently, f ∈ A.C([T_1, T_2]) if for all ε > 0 there exists a δ > 0 such that, if {(x_k, y_k)}_{k=1}^{m} is any finite collection of disjoint open intervals in [T_1, T_2] with Σ_{k=1}^{m} (y_k − x_k) < δ, then Σ_{k=1}^{m} |f(y_k) − f(x_k)| < ε. Equivalently, f ∈ A.C([T_1, T_2]) if there exists g ∈ L^1([T_1, T_2]) such that f(x) = f(T_1) + ∫_{T_1}^{x} g(t) dt for all x ∈ [T_1, T_2]; in such a case, g = f' almost everywhere on [T_1, T_2]. For more details on these statements, see [2, p. 129], [29, Th. 2, p. 67].

Two results concerning absolute continuity will be used:

(i) If f ∈ A.C([T_1, T_2]), then e^f ∈ A.C([T_1, T_2]) (apply the Mean Value Theorem to e^f and the ε-δ definition of absolute continuity).

(ii) The product of absolutely continuous functions is absolutely continuous.

Theorem 1.1. If the data processes a and b of the randomized initial value problem (1.1) satisfy a(·, ω), b(·, ω) ∈ L^1([t_0, T]) for a.e. ω ∈ Ω, then the solution process x(t, ω) given in (1.3) has absolutely continuous sample paths on [t_0, T], x(t_0, ω) = x_0(ω) a.s. and, for a.e. ω ∈ Ω, x'(t, ω) = a(t, ω) x(t, ω) + b(t, ω) for a.e. t ∈ [t_0, T]. Moreover, this is the unique absolutely continuous process that is a solution.

On the other hand, if a and b have continuous sample paths on [t_0, T], then x(t, ω) has C^1([t_0, T]) sample paths and x'(t, ω) = a(t, ω) x(t, ω) + b(t, ω) for all t ∈ [t_0, T] and a.e. ω ∈ Ω. Moreover, this is the unique differentiable process that is a solution.

Proof. We rewrite (1.3) as

x(t, ω) = e^{∫_{t_0}^{t} a(s,ω) ds} { x_0(ω) + ∫_{t_0}^{t} b(s, ω) e^{−∫_{t_0}^{s} a(r,ω) dr} ds }.

Since a(·, ω) ∈ L^1([t_0, T]), the function ∫_{t_0}^{t} a(s, ω) ds belongs to A.C([t_0, T]), for a.e. ω ∈ Ω. Therefore, by (i), e^{∫_{t_0}^{t} a(s,ω) ds} belongs to A.C([t_0, T]), for a.e. ω ∈ Ω, with derivative a(t, ω) e^{∫_{t_0}^{t} a(s,ω) ds}. On the other hand,

|b(s, ω)| e^{−∫_{t_0}^{s} a(r,ω) dr} ≤ |b(s, ω)| e^{|∫_{t_0}^{s} a(r,ω) dr|} ≤ |b(s, ω)| e^{∫_{t_0}^{s} |a(r,ω)| dr} ≤ |b(s, ω)| e^{∫_{t_0}^{T} |a(r,ω)| dr} = |b(s, ω)| e^{‖a(·,ω)‖_{L^1([t_0,T])}} ∈ L^1([t_0, T], ds),

for a.e. ω ∈ Ω. Then the function ∫_{t_0}^{t} b(s, ω) e^{−∫_{t_0}^{s} a(r,ω) dr} ds belongs to A.C([t_0, T]), for a.e. ω ∈ Ω, with derivative b(t, ω) e^{−∫_{t_0}^{t} a(r,ω) dr}.

As the product of absolutely continuous functions is absolutely continuous (see (ii)), we derive that, for a.e. ω ∈ Ω, x(·, ω) ∈ A.C([t_0, T]). Moreover, the product rule for the derivative yields, for a.e. ω ∈ Ω, x'(t, ω) = a(t, ω) x(t, ω) + b(t, ω) for a.e. t ∈ [t_0, T]. For the uniqueness, we apply Carathéodory's existence theorem [11, p. 30].

If a and b have continuous sample paths on [t_0, T], one uses the fundamental theorem of calculus for the Riemann integral instead. The uniqueness follows from the global version of the Picard-Lindelöf theorem or, alternatively, from standard results on the deterministic linear differential equation.

2. Obtaining the probability density function of the solution

The main goal of this article is, under suitable hypotheses, to compute approximations of the probability density function, f_1(x, t), of the solution stochastic process given in (1.3), x(t, ω), for t ∈ [t_0, T]. To achieve this goal, we will use the Karhunen-Loève expansions of both data stochastic processes a and b.

Hereinafter, the operators E[·], V[·] and Cov[·, ·] will denote the expectation, the variance and the covariance, respectively. We state two crucial results that will be applied throughout our subsequent analysis.

Lemma 2.1 (Random variable transformation technique). Let X be an absolutely continuous random vector with density f_X and with support D_X contained in an open set D ⊆ R^n. Let g : D → R^n be a C^1(D) function, injective on D, such that Jg(x) ≠ 0 for all x ∈ D (J stands for Jacobian). Let h = g^{−1} : g(D) → R^n. Let Y = g(X) be a random vector. Then Y is absolutely continuous with density

f_Y(y) = f_X(h(y)) |Jh(y)| if y ∈ g(D), and f_Y(y) = 0 if y ∉ g(D).  (2.1)

The proof of the above lemma appears in [19, Lemma 4.12].

Lemma 2.2 (Karhunen-Loève theorem). Consider a stochastic process {X(t) : t ∈ T} in L^2(T × Ω). Then

X(t, ω) = μ(t) + Σ_{j=1}^{∞} √ν_j φ_j(t) ξ_j(ω),  (2.2)

where the sum converges in L^2(T × Ω), μ(t) = E[X(t)], {φ_j}_{j=1}^{∞} is an orthonormal basis of L^2(T), {(ν_j, φ_j)}_{j=1}^{∞} is the set of pairs of (nonnegative) eigenvalues and eigenfunctions of the operator

C : L^2(T) → L^2(T),  (Cf)(t) = ∫_T Cov[X(t), X(s)] f(s) ds,  (2.3)

and {ξ_j}_{j=1}^{∞} is a sequence of pairwise uncorrelated random variables with zero expectation and unit variance. Moreover, if {X(t) : t ∈ T} is a Gaussian process, then {ξ_j}_{j=1}^{∞} are independent and Gaussian.

The proof of the above lemma appears in [19, Theorem 5.28].
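As a concrete instance of Lemma 2.2, standard Brownian motion on [0, 1] has a Karhunen-Loève expansion with closed-form eigenpairs, ν_j = 1/((j − 1/2)^2 π^2) and φ_j(t) = √2 sin((j − 1/2)πt). The sketch below (the truncation level J is an arbitrary choice) checks that the truncated series Σ_j ν_j φ_j(s) φ_j(t) reproduces the covariance Cov[W(s), W(t)] = min(s, t).

```python
import numpy as np

# Known KL eigenpairs of standard Brownian motion on [0, 1]:
#   nu_j = 1 / ((j - 1/2)^2 pi^2),  phi_j(t) = sqrt(2) sin((j - 1/2) pi t).
J = 500                                              # truncation level (arbitrary)
j = np.arange(1, J + 1)
nu = 1.0 / ((j - 0.5) ** 2 * np.pi ** 2)             # eigenvalues
phi = lambda t: np.sqrt(2.0) * np.sin((j - 0.5) * np.pi * t)  # eigenfunctions

# Mercer-type check: sum_j nu_j phi_j(s) phi_j(t) should approximate
# the covariance Cov[W(s), W(t)] = min(s, t).
s, t = 0.3, 0.7
cov_trunc = np.sum(nu * phi(s) * phi(t))
print(abs(cov_trunc - min(s, t)) < 1e-3)  # True: close to min(s, t) = 0.3
```

Sample paths of the process can then be simulated by drawing the ξ_j as independent standard Gaussians, as the lemma guarantees for Gaussian processes.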

Remark 2.3. When the operator C defined in (2.3) has only a finite number J of nonzero eigenvalues, the process X of Lemma 2.2 can be expressed as a finite sum:

X(t, ω) = μ(t) + Σ_{j=1}^{J} √ν_j φ_j(t) ξ_j(ω).

In the subsequent development, we will write the data stochastic processes a and b via their Karhunen-Loève expansions. The summation symbol in the expansion will always be written up to ∞ (the most difficult case), although the corresponding covariance integral operators C may have only a finite number of nonzero eigenvalues. In such a case, when we write vectors of the form (ξ_1, . . . , ξ_N) for N ≥ 1 later on (for instance, see the hypotheses of the forthcoming theorems or the approximating densities (2.5), (2.16), etc.), we will interpret that we stop at N = J if J < ∞.

Suppose that a, b ∈ L^2([t_0, T] × Ω). Then, according to Lemma 2.2, we can write their Karhunen-Loève expansions as

a(t, ω) = μ_a(t) + Σ_{j=1}^{∞} √ν_j φ_j(t) ξ_j(ω),  b(t, ω) = μ_b(t) + Σ_{i=1}^{∞} √γ_i ψ_i(t) η_i(ω),

where {(ν_j, φ_j)}_{j=1}^{∞} and {(γ_i, ψ_i)}_{i=1}^{∞} are the corresponding pairs of (nonnegative) eigenvalues and eigenfunctions, {ξ_j}_{j=1}^{∞} are pairwise uncorrelated random variables with zero expectation and unit variance, and {η_i}_{i=1}^{∞} are also pairwise uncorrelated random variables with zero expectation and unit variance. We will not assume any particular ordering of the sequences of pairs {(ν_j, φ_j)}_{j=1}^{∞} and {(γ_i, ψ_i)}_{i=1}^{∞}. In practice, the ordering will be chosen so that the hypotheses of the theorems stated later on are satisfied (for example, if a theorem requires ξ_1 to satisfy a certain condition, then we can reorder the pairs of eigenvalues and eigenfunctions and the random variables ξ_1, ξ_2, . . . so that ξ_1 satisfies the condition).

We truncate the Karhunen-Loève expansions at indexes N and M, respectively:

a_N(t, ω) = μ_a(t) + Σ_{j=1}^{N} √ν_j φ_j(t) ξ_j(ω),  b_M(t, ω) = μ_b(t) + Σ_{i=1}^{M} √γ_i ψ_i(t) η_i(ω).

This allows us to have a truncation of the solution given in (1.3):

x_{N,M}(t, ω) = x_0(ω) e^{∫_{t_0}^{t} a_N(s,ω) ds} + ∫_{t_0}^{t} b_M(s, ω) e^{∫_{s}^{t} a_N(r,ω) dr} ds

= x_0(ω) e^{∫_{t_0}^{t} [μ_a(s) + Σ_{j=1}^{N} √ν_j φ_j(s) ξ_j(ω)] ds} + ∫_{t_0}^{t} [μ_b(s) + Σ_{i=1}^{M} √γ_i ψ_i(s) η_i(ω)] e^{∫_{s}^{t} [μ_a(r) + Σ_{j=1}^{N} √ν_j φ_j(r) ξ_j(ω)] dr} ds.

For convenience of notation, we will denote (in bold letters) ξ_N = (ξ_1, . . . , ξ_N) and η_M = (η_1, . . . , η_M), understood as a random vector or as a deterministic real vector, depending on the context. We also denote

K_a(t, ξ_N) = ∫_{t_0}^{t} [μ_a(s) + Σ_{j=1}^{N} √ν_j φ_j(s) ξ_j] ds,

S_b(s, η_M) = μ_b(s) + Σ_{i=1}^{M} √γ_i ψ_i(s) η_i.

In this way,

x_{N,M}(t, ω) = x_0(ω) e^{K_a(t, ξ_N(ω))} + ∫_{t_0}^{t} S_b(s, η_M(ω)) e^{K_a(t, ξ_N(ω)) − K_a(s, ξ_N(ω))} ds

= e^{K_a(t, ξ_N(ω))} { x_0(ω) + ∫_{t_0}^{t} S_b(s, η_M(ω)) e^{−K_a(s, ξ_N(ω))} ds }.  (2.4)
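The truncated solution (2.4) is straightforward to sample by Monte Carlo once the KL variables are drawn. The sketch below uses a deliberately minimal toy setting (all choices are assumptions for illustration): t_0 = 0, b = 0, x_0 ≡ 1, and a single Gaussian KL term for a with μ_a = 0, ν_1 = 1 and φ_1 ≡ 1 on [0, 1], so that K_a(t, ξ_1) = ξ_1 t and the truncation reduces to e^{ξ_1(ω) t}.

```python
import numpy as np

# Toy instance of (2.4) (assumptions): t0 = 0, b = 0, x0 = 1, and one KL term
# with mu_a = 0, nu_1 = 1, phi_1 = 1 on [0, 1], so K_a(t, xi_1) = xi_1 * t
# and the truncated solution is x(t, omega) = e^{xi_1(omega) t}.
rng = np.random.default_rng(42)
xi = rng.standard_normal(200_000)   # xi_1 ~ N(0, 1) (Gaussian KL variable)

t = 0.5
samples = np.exp(xi * t)            # Monte Carlo realizations of the truncation

mc_mean = samples.mean()
exact_mean = np.exp(t**2 / 2)       # E[e^{xi t}] = e^{t^2/2} for xi ~ N(0, 1)
print(abs(mc_mean - exact_mean) < 0.02)  # True up to Monte Carlo error
```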

We assume that x_0 and (ξ_1, . . . , ξ_N, η_1, . . . , η_M) are absolutely continuous and independent, for all N, M ≥ 1. The densities of x_0 and (ξ_1, . . . , ξ_N, η_1, . . . , η_M) will be denoted by f_0 and f_{(ξ_1,...,ξ_N,η_1,...,η_M)}, respectively.

Under the scenario described, in the following subsections we will analyze how to approximate the probability density function of the solution stochastic process x(t, ω) given in (1.3). The key idea is to compute the density function of the truncation x_{N,M}(t, ω) given in (2.4), taking advantage of Lemma 2.1, and then to prove that it converges to a density of x(t, ω). The following subsections are divided according to the way Lemma 2.1 is applied. As will be seen later, our approach is based upon which variable is essentially isolated when computing the inverse of the specific transformation mapping chosen to apply Lemma 2.1. For instance, in Subsection 2.1 we will isolate the random variable x_0 and in Subsection 2.3 we will isolate the random variable η_1. This permits having different hypotheses under which a density function of x(t, ω) can be approximated. This approach allows us to achieve a lot of generality in our findings (see Theorem 2.4, Theorem 2.7, Theorem 2.9 and Theorem 2.12). We will study the homogeneous and non-homogeneous cases, corresponding to b = 0 and b ≠ 0, respectively. The particular case b = 0 yields interesting specific results for the random non-autonomous homogeneous linear differential equation (see Subsection 2.2, Subsection 2.4 and Subsection 2.5, Theorem 2.5, Theorem 2.8, Theorem 2.10, Theorem 2.11 and Theorem 2.13).

2.1. Obtaining the density function when f_0 is Lipschitz on R. Using Lemma 2.1, we are able to compute the density of x_{N,M}(t, ω). Indeed, with the notation of Lemma 2.1,

g(x_0, ξ_1, . . . , ξ_N, η_1, . . . , η_M) = ( e^{K_a(t, ξ_N)} { x_0 + ∫_{t_0}^{t} S_b(s, η_M) e^{−K_a(s, ξ_N)} ds }, ξ_N, η_M ),

D = R^{N+M+1},  g(D) = R^{N+M+1},

h(x_0, ξ_1, . . . , ξ_N, η_1, . . . , η_M) = ( x_0 e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_M) e^{−K_a(s, ξ_N)} ds, ξ_N, η_M ),

Jh(x_0, ξ_1, . . . , ξ_N, η_1, . . . , η_M) = e^{−K_a(t, ξ_N)} > 0.

Then, taking the marginal distribution with respect to (ξ_N, η_M) and denoting by f_1^{N,M}(x, t) the density of x_{N,M}(t, ω), we have

f_1^{N,M}(x, t) = ∫_{R^{N+M}} f_0( x e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_M) e^{−K_a(s, ξ_N)} ds ) f_{ξ_N, η_M}(ξ_N, η_M) e^{−K_a(t, ξ_N)} dξ_N dη_M.

To be able to compute the limit of f_1^{N,M}(x, t) as N, M → ∞ easily, and without loss of generality, we take N = M, so that we work with the density f_1^N(x, t) of x_{N,N}(t, ω):

f_1^N(x, t) = ∫_{R^{2N}} f_0( x e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_N) e^{−K_a(s, ξ_N)} ds ) f_{ξ_N, η_N}(ξ_N, η_N) e^{−K_a(t, ξ_N)} dξ_N dη_N.  (2.5)
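Since (2.5) is an expectation over (ξ_N, η_N), it can be evaluated by Monte Carlo. The sketch below does this in a toy homogeneous setting (assumptions for illustration: b = 0, K_a(t, ξ_1) = ξ_1 t with ξ_1 ~ N(0, 1), and x_0 standard normal with density f_0), where (2.5) reduces to f_1^N(x, t) = E[f_0(x e^{−ξ_1 t}) e^{−ξ_1 t}]; the computed function should integrate to one, as any density must.

```python
import numpy as np

# Toy homogeneous instance of (2.5) (assumptions): b = 0,
# K_a(t, xi_1) = xi_1 * t with xi_1 ~ N(0, 1), x0 standard normal, so
#   f_1^N(x, t) = E[ f0(x e^{-xi_1 t}) e^{-xi_1 t} ].
f0 = lambda u: np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)   # density of x0

rng = np.random.default_rng(1)
xi = rng.standard_normal(20_000)     # Monte Carlo samples of xi_1
t = 0.5
w = np.exp(-xi * t)                  # e^{-K_a(t, xi_1)} for each sample

xs = np.linspace(-30.0, 30.0, 3001)
# Average over the xi-samples of f0(x e^{-K_a}) e^{-K_a}, for each grid point x
f1 = np.array([np.mean(f0(x * w) * w) for x in xs])

# Any valid density must integrate to one over the real line
mass = np.sum((f1[1:] + f1[:-1]) * np.diff(xs)) / 2
print(abs(mass - 1.0) < 1e-2)  # True up to truncation and quadrature error
```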

In the next theorem we establish hypotheses under which {f_1^N(x, t)}_{N=1}^{∞} converges to a density of the solution x(t, ω) given by (1.3).

Theorem 2.4. Assume the following four hypotheses:

(1) a, b ∈ L^2([t_0, T] × Ω);

(2) x_0 and (ξ_1, . . . , ξ_N, η_1, . . . , η_N) are absolutely continuous and independent, N ≥ 1;

(3) the density function of x_0, f_0, is Lipschitz on R;

(4) there exist 2 ≤ p ≤ ∞ and 4 ≤ q ≤ ∞ such that 1/p + 2/q = 1/2,

‖μ_b‖_{L^p(t_0,T)} + Σ_{j=1}^{∞} √γ_j ‖ψ_j‖_{L^p(t_0,T)} ‖η_j‖_{L^p(Ω)} < ∞,

‖e^{−K_a(t, ξ_N)}‖_{L^q(Ω)} ≤ C, for all N ≥ 1 and t ∈ [t_0, T].

Then the sequence {f_1^N(x, t)}_{N=1}^{∞} given in (2.5) converges in L^∞(J × [t_0, T]), for every bounded set J ⊆ R, to a density f_1(x, t) of the solution x(t, ω) given in (1.3).

Proof. We prove that {f_1^N(x, t)}_{N=1}^{∞} is Cauchy in L^∞(J × [t_0, T]) for every bounded set J ⊆ R. Fix two indexes N > M.

First of all, note that, taking the marginal distribution,

f_{ξ_M, η_M}(ξ_M, η_M) = ∫_{R^{2(N−M)}} f_{ξ_N, η_N}(ξ_N, η_N) dξ_{M+1} · · · dξ_N dη_{M+1} · · · dη_N,

so, according to (2.5) with index M,

f_1^M(x, t) = ∫_{R^{2M}} f_0( x e^{−K_a(t, ξ_M)} − ∫_{t_0}^{t} S_b(s, η_M) e^{−K_a(s, ξ_M)} ds ) f_{ξ_M, η_M}(ξ_M, η_M) e^{−K_a(t, ξ_M)} dξ_M dη_M

= ∫_{R^{2N}} f_0( x e^{−K_a(t, ξ_M)} − ∫_{t_0}^{t} S_b(s, η_M) e^{−K_a(s, ξ_M)} ds ) f_{ξ_N, η_N}(ξ_N, η_N) e^{−K_a(t, ξ_M)} dξ_N dη_N.  (2.6)

Using (2.5) and (2.6), we estimate the difference

|f_1^N(x, t) − f_1^M(x, t)|

≤ ∫_{R^{2N}} | f_0( x e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_N) e^{−K_a(s, ξ_N)} ds ) e^{−K_a(t, ξ_N)} − f_0( x e^{−K_a(t, ξ_M)} − ∫_{t_0}^{t} S_b(s, η_M) e^{−K_a(s, ξ_M)} ds ) e^{−K_a(t, ξ_M)} | f_{ξ_N, η_N}(ξ_N, η_N) dξ_N dη_N

≤ ∫_{R^{2N}} f_0( x e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_N) e^{−K_a(s, ξ_N)} ds ) |e^{−K_a(t, ξ_N)} − e^{−K_a(t, ξ_M)}| f_{ξ_N, η_N}(ξ_N, η_N) dξ_N dη_N

+ ∫_{R^{2N}} | f_0( x e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_N) e^{−K_a(s, ξ_N)} ds ) − f_0( x e^{−K_a(t, ξ_M)} − ∫_{t_0}^{t} S_b(s, η_M) e^{−K_a(s, ξ_M)} ds ) | e^{−K_a(t, ξ_M)} f_{ξ_N, η_N}(ξ_N, η_N) dξ_N dη_N

=: (I1) + (I2).

Henceforth, concerning notation, we will denote by C any constant independent of N, t and x, so that the notation does not become cumbersome. Call L the Lipschitz constant of f_0.

Now we introduce two inequalities that are direct consequences of the Cauchy-Schwarz inequality and that will play a crucial role later on:

‖K_a(t, ξ_N) − K_a(t, ξ_M)‖_{L^2(Ω)} = E[ ( ∫_{t_0}^{t} (a_N(s) − a_M(s)) ds )^2 ]^{1/2} ≤ √(t − t_0) E[ ∫_{t_0}^{t} (a_N(s) − a_M(s))^2 ds ]^{1/2} ≤ C ‖a_N − a_M‖_{L^2([t_0,T]×Ω)},  (2.7)

∫_{t_0}^{T} E[ |S_b(s, η_N) − S_b(s, η_M)|^2 ]^{1/2} ds ≤ C ‖b_N − b_M‖_{L^2([t_0,T]×Ω)}.  (2.8)
Now we find a bound for (I1). Since f_0 is Lipschitz continuous (therefore uniformly continuous) and integrable on R, it is bounded on the real line.

Recall that if a function f belongs to L^1(R) and is uniformly continuous on R, then lim_{x→∞} f(x) = 0, and as a consequence f is bounded on R. Indeed, suppose that lim_{x→∞} f(x) ≠ 0. By definition, there exist ε_0 > 0 and a sequence {x_n}_{n=1}^{∞} increasing to ∞ such that |f(x_n)| > ε_0. We may assume x_{n+1} − x_n > 1, for n ≥ 1. By uniform continuity, there exists a δ = δ(ε_0) > 0 such that |f(x) − f(y)| < ε_0/2 whenever |x − y| < δ. We may assume 0 < δ < 1/2, so that {(x_n − δ, x_n + δ)}_{n=1}^{∞} are pairwise disjoint intervals. We have |f(x)| > ε_0/2 for all x ∈ (x_n − δ, x_n + δ) and every n ∈ N. Thereby, ∫_R |f(x)| dx ≥ Σ_{n=1}^{∞} ∫_{x_n−δ}^{x_n+δ} |f(x)| dx ≥ Σ_{n=1}^{∞} δ ε_0 = ∞, which is a contradiction.

Therefore

f_0( x e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_N) e^{−K_a(s, ξ_N)} ds ) ≤ C.

To bound |e^{−K_a(t, ξ_N)} − e^{−K_a(t, ξ_M)}|, we apply the Mean Value Theorem to the real function e^{−x}, for each fixed t and ξ_N. We have

e^{−K_a(t, ξ_N)} − e^{−K_a(t, ξ_M)} = −e^{−δ_{t,ξ_N}} { K_a(t, ξ_N) − K_a(t, ξ_M) },

where K_a(t, ξ_N) ≤ δ_{t,ξ_N} ≤ K_a(t, ξ_M) or K_a(t, ξ_M) ≤ δ_{t,ξ_N} ≤ K_a(t, ξ_N), which implies

e^{−δ_{t,ξ_N}} ≤ e^{−K_a(t, ξ_N)} + e^{−K_a(t, ξ_M)}.

Thus,

|e^{−K_a(t, ξ_N)} − e^{−K_a(t, ξ_M)}| ≤ (e^{−K_a(t, ξ_N)} + e^{−K_a(t, ξ_M)}) |K_a(t, ξ_N) − K_a(t, ξ_M)|.  (2.9)

We have the bound

f_0( x e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_N) e^{−K_a(s, ξ_N)} ds ) |e^{−K_a(t, ξ_N)} − e^{−K_a(t, ξ_M)}| ≤ C (e^{−K_a(t, ξ_N)} + e^{−K_a(t, ξ_M)}) |K_a(t, ξ_N) − K_a(t, ξ_M)|.

Then (I1) is bounded by the expectation of the above expression:

(I1) = ∫_{R^{2N}} f_0( x e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_N) e^{−K_a(s, ξ_N)} ds ) |e^{−K_a(t, ξ_N)} − e^{−K_a(t, ξ_M)}| f_{ξ_N, η_N}(ξ_N, η_N) dξ_N dη_N

≤ C E[ (e^{−K_a(t, ξ_N)} + e^{−K_a(t, ξ_M)}) |K_a(t, ξ_N) − K_a(t, ξ_M)| ].

By Hölder's inequality, hypothesis (4) and bound (2.7),

(I1) ≤ C ( ‖e^{−K_a(t, ξ_N)}‖_{L^2(Ω)} + ‖e^{−K_a(t, ξ_M)}‖_{L^2(Ω)} ) ‖K_a(t, ξ_N) − K_a(t, ξ_M)‖_{L^2(Ω)} ≤ C ‖K_a(t, ξ_N) − K_a(t, ξ_M)‖_{L^2(Ω)} ≤ C ‖a_N − a_M‖_{L^2([t_0,T]×Ω)}.

Now we bound (I2). Using the Lipschitz condition and the triangle inequality,

| f_0( x e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_N) e^{−K_a(s, ξ_N)} ds ) − f_0( x e^{−K_a(t, ξ_M)} − ∫_{t_0}^{t} S_b(s, η_M) e^{−K_a(s, ξ_M)} ds ) |

≤ L |x| |e^{−K_a(t, ξ_N)} − e^{−K_a(t, ξ_M)}| + L ∫_{t_0}^{t} |S_b(s, η_N) e^{−K_a(s, ξ_N)} − S_b(s, η_M) e^{−K_a(s, ξ_M)}| ds

≤ L |x| |e^{−K_a(t, ξ_N)} − e^{−K_a(t, ξ_M)}| + L ∫_{t_0}^{t} |S_b(s, η_N)| |e^{−K_a(s, ξ_N)} − e^{−K_a(s, ξ_M)}| ds + L ∫_{t_0}^{t} e^{−K_a(s, ξ_M)} |S_b(s, η_N) − S_b(s, η_M)| ds

=: (B1) + (B2) + (B3).

Using the bound (2.9) from the Mean Value Theorem,

(B1) ≤ L |x| (e^{−K_a(t, ξ_N)} + e^{−K_a(t, ξ_M)}) |K_a(t, ξ_N) − K_a(t, ξ_M)|,

(B2) ≤ L ∫_{t_0}^{T} |S_b(s, η_N)| (e^{−K_a(s, ξ_N)} + e^{−K_a(s, ξ_M)}) |K_a(s, ξ_N) − K_a(s, ξ_M)| ds.

Since

(I2) = ∫_{R^{2N}} | f_0( x e^{−K_a(t, ξ_N)} − ∫_{t_0}^{t} S_b(s, η_N) e^{−K_a(s, ξ_N)} ds ) − f_0( x e^{−K_a(t, ξ_M)} − ∫_{t_0}^{t} S_b(s, η_M) e^{−K_a(s, ξ_M)} ds ) | e^{−K_a(t, ξ_M)} f_{ξ_N, η_N}(ξ_N, η_N) dξ_N dη_N

≤ E[(B1) e^{−K_a(t, ξ_M)}] + E[(B2) e^{−K_a(t, ξ_M)}] + E[(B3) e^{−K_a(t, ξ_M)}] =: (E1) + (E2) + (E3),

we need to bound these three expectations (E1), (E2) and (E3). First, for (E1), using Hölder's inequality, hypothesis (4) and (2.7), one obtains

(E1) ≤ L |x| ‖e^{−K_a(t, ξ_M)}‖_{L^4(Ω)} ( ‖e^{−K_a(t, ξ_N)}‖_{L^4(Ω)} + ‖e^{−K_a(t, ξ_M)}‖_{L^4(Ω)} ) ‖K_a(t, ξ_N) − K_a(t, ξ_M)‖_{L^2(Ω)}

≤ C |x| ‖K_a(t, ξ_N) − K_a(t, ξ_M)‖_{L^2(Ω)} ≤ C |x| ‖a_N − a_M‖_{L^2([t_0,T]×Ω)}.
By an analogous reasoning, but using (2.8), one deduces the following bounds:

(E2) ≤ L ∫_{t_0}^{T} E[|S_b(s, η_N)|^p]^{1/p} E[e^{−q K_a(t, ξ_M)}]^{1/q} ( E[e^{−q K_a(s, ξ_N)}]^{1/q} + E[e^{−q K_a(s, ξ_M)}]^{1/q} ) E[|K_a(s, ξ_N) − K_a(s, ξ_M)|^2]^{1/2} ds

≤ C ∫_{t_0}^{T} E[|S_b(s, η_N)|^p]^{1/p} ‖K_a(s, ξ_N) − K_a(s, ξ_M)‖_{L^2(Ω)} ds

≤ C ‖a_N − a_M‖_{L^2([t_0,T]×Ω)} ∫_{t_0}^{T} E[|S_b(s, η_N)|^p]^{1/p} ds

≤ C ‖a_N − a_M‖_{L^2([t_0,T]×Ω)} ‖S_b(·, η_N)‖_{L^p([t_0,T]×Ω)}

≤ C ‖a_N − a_M‖_{L^2([t_0,T]×Ω)}

and

(E3) = L ∫_{t_0}^{T} E[ e^{−K_a(t, ξ_M)} e^{−K_a(s, ξ_M)} |S_b(s, η_N) − S_b(s, η_M)| ] ds

≤ L ∫_{t_0}^{T} ‖e^{−K_a(t, ξ_M)}‖_{L^4(Ω)} ‖e^{−K_a(s, ξ_M)}‖_{L^4(Ω)} ‖S_b(s, η_N) − S_b(s, η_M)‖_{L^2(Ω)} ds

≤ C ∫_{t_0}^{T} ‖S_b(s, η_N) − S_b(s, η_M)‖_{L^2(Ω)} ds

≤ C ‖b_N − b_M‖_{L^2([t_0,T]×Ω)}.
Thus

(I2) ≤ (E1) + (E2) + (E3) ≤ C (|x| + 1) ‖a_N − a_M‖_{L^2([t_0,T]×Ω)} + C ‖b_N − b_M‖_{L^2([t_0,T]×Ω)}.

Since ‖a_N − a_M‖_{L^2([t_0,T]×Ω)} → 0 and ‖b_N − b_M‖_{L^2([t_0,T]×Ω)} → 0 as N, M → ∞, the sequence {f_1^N(x, t)}_{N=1}^{∞} is Cauchy in L^∞(J × [t_0, T]) for every bounded set J ⊆ R. Let

g(x, t) = lim_{N→∞} f_1^N(x, t),

x ∈ R and t ∈ [t_0, T]. Let us see that x(t, ·) is absolutely continuous and g(·, t) is a density of x(t, ·).

First, note that g(·, t) ∈ L^1(R), since by Fatou's Lemma [27, Lemma 1.7, p. 61],

∫_R g(x, t) dx = ∫_R lim_{N→∞} f_1^N(x, t) dx ≤ lim inf_{N→∞} ∫_R f_1^N(x, t) dx = 1 < ∞,

where the last integral equals 1 because f_1^N(·, t) is a density.

Recall that

x_{N,N}(t, ω) = x_0(ω) e^{∫_{t_0}^{t} a_N(s,ω) ds} + ∫_{t_0}^{t} b_N(s, ω) e^{∫_{s}^{t} a_N(r,ω) dr} ds.

We check that x_{N,N}(t, ω) → x(t, ω) as N → ∞ for every t ∈ [t_0, T] and a.e. ω ∈ Ω. We know that a_N(·, ω) → a(·, ω) in L^2([t_0, T]) and b_N(·, ω) → b(·, ω) in L^2([t_0, T]) as N → ∞, for a.e. ω ∈ Ω, because the Fourier series converges in L^2.

On the one hand, ∫_{t_0}^{t} a_N(s, ω) ds → ∫_{t_0}^{t} a(s, ω) ds as N → ∞, for all t ∈ [t_0, T] and for a.e. ω ∈ Ω, whence

x_0(ω) e^{∫_{t_0}^{t} a_N(s,ω) ds} → x_0(ω) e^{∫_{t_0}^{t} a(s,ω) ds},  (2.10)

for all t ∈ [t_0, T] and for a.e. ω ∈ Ω.

On the other hand,

| b_N(s, ω) e^{∫_{s}^{t} a_N(r,ω) dr} − b(s, ω) e^{∫_{s}^{t} a(r,ω) dr} | ≤ |b_N(s, ω) − b(s, ω)| e^{∫_{s}^{t} a_N(r,ω) dr} + |b(s, ω)| | e^{∫_{s}^{t} a_N(r,ω) dr} − e^{∫_{s}^{t} a(r,ω) dr} |.
We bound the expressions involving exponentials. First, using the deterministic Cauchy-Schwarz inequality for integrals, one gets

e^{∫_{s}^{t} a_N(r,ω) dr} ≤ e^{√(T−t_0) ‖a_N(·,ω)‖_{L^2([t_0,T])}} ≤ e^{C_ω} = C_ω,

where C_ω represents a constant depending on ω, and independent of N, t and x. By the Mean Value Theorem applied to the real function e^x,

e^{∫_{s}^{t} a_N(r,ω) dr} − e^{∫_{s}^{t} a(r,ω) dr} = e^{δ_{N,s,t,ω}} ( ∫_{s}^{t} a_N(r, ω) dr − ∫_{s}^{t} a(r, ω) dr ),

where

|δ_{N,s,t,ω}| ≤ max{ |∫_{s}^{t} a_N(r, ω) dr|, |∫_{s}^{t} a(r, ω) dr| } ≤ √(T−t_0) max{ ‖a(·, ω)‖_{L^2([t_0,T])}, ‖a_N(·, ω)‖_{L^2([t_0,T])} } ≤ C_ω.

Thus,

| e^{∫_{s}^{t} a_N(r,ω) dr} − e^{∫_{s}^{t} a(r,ω) dr} | ≤ C_ω | ∫_{s}^{t} a_N(r, ω) dr − ∫_{s}^{t} a(r, ω) dr | ≤ C_ω ‖a_N(·, ω) − a(·, ω)‖_{L^2([t_0,T])}.

Therefore,

∫_{t_0}^{t} | b_N(s, ω) e^{∫_{s}^{t} a_N(r,ω) dr} − b(s, ω) e^{∫_{s}^{t} a(r,ω) dr} | ds ≤ C_ω ( ‖b_N(·, ω) − b(·, ω)‖_{L^1([t_0,T])} + ‖b(·, ω)‖_{L^1([t_0,T])} ‖a_N(·, ω) − a(·, ω)‖_{L^2([t_0,T])} ) → 0 as N → ∞.  (2.11)

This shows that x_{N,N}(t, ω) → x(t, ω) as N → ∞ for every t ∈ [t_0, T] and a.e. ω ∈ Ω. Hence x_{N,N}(t, ·) → x(t, ·) a.s. as N → ∞, and therefore in law:

lim_{N→∞} F_N(x, t) = F(x, t)

for every x ∈ R that is a point of continuity of F(·, t), where F_N(·, t) and F(·, t) are the distribution functions of x_{N,N}(t, ·) and x(t, ·), respectively. Since f_1^N(x, t) is the density of x_{N,N}(t, ω),

F_N(x, t) = F_N(x_0, t) + ∫_{x_0}^{x} f_1^N(y, t) dy.  (2.12)

If x and x_0 are points of continuity of F(·, t), taking limits as N → ∞ we obtain

F(x, t) = F(x_0, t) + ∫_{x_0}^{x} g(y, t) dy  (2.13)

(recall that {f_1^N(x, t)}_{N=1}^{∞} converges to g(x, t) in L^∞(J × [t_0, T]) for every bounded set J ⊆ R, so we can interchange the limit and the integral). As the points of discontinuity of F(·, t) are countable and F(·, t) is right continuous, we obtain

F(x, t) = F(x_0, t) + ∫_{x_0}^{x} g(y, t) dy

for all x_0 and x in R. Thus, g(x, t) = f_1(x, t) is a density of x(t, ω), as desired.

2.2. Obtaining the density function when b = 0 and f_0 is Lipschitz on R. If b = 0, all the previous exposition can be adapted to approximate the density function of the solution of the randomized non-autonomous homogeneous linear differential equation associated with the initial value problem (1.1). In this case, the solution stochastic process is

x(t, ω) = x_0(ω) e^{∫_{t_0}^{t} a(s,ω) ds}.  (2.14)

We only need the Karhunen-Loève expansion of the stochastic process a,

a(t, ω) = μ_a(t) + Σ_{j=1}^{∞} √ν_j φ_j(t) ξ_j(ω),

where {(ν_j, φ_j)}_{j=1}^{∞} are the corresponding pairs of eigenvalues and eigenfunctions and {ξ_j}_{j=1}^{∞} are random variables with zero expectation, unit variance and pairwise