MMET Summary

Probability & Combinatorics

  • An event is a set of outcomes of an experiment.
  • The sample space is the set of all possible outcomes of the experiment; every event that can happen is a subset of it.
  • Disjoint events cannot occur together: they are mutually exclusive, $A \cap B = \emptyset$.
  • For an event $A$, the probability of $A$, denoted $P(A)$, is a number between 0 and 1: $0 \le P(A) \le 1$
  • The probability of two events $A$ and $B$ occurring together is $P(A \cap B)$; for disjoint events, $P(A \cup B) = P(A) + P(B)$
  • A permutation is an arrangement of the elements of a set
  • A partial permutation is an arrangement of only part of the elements of a set
  • Multiset permutations are permutations of collections containing some indistinguishable elements, that is, repeated elements that are effectively the same; for multiplicities $n_1, \dots, n_r$ summing to $n$ there are $\frac{n!}{n_1! \cdots n_r!}$ of them.
  • If you have a set of $n$ distinct elements, the number of permutations of those elements, that is the number of orderings you can make with those elements, is equal to $n!$
  • The number of partial permutations ($k$-permutations) of $k$ elements out of $n$ distinct elements is $\frac{n!}{(n-k)!}$
  • A combination is a group of elements chosen among the elements of a set, where the order of choice is irrelevant
  • The number of combinations of $k$ elements chosen from $n$ distinct elements in total is $\binom{n}{k} = \frac{n!}{k!(n-k)!}$, known as the binomial coefficient, read "n choose k", where the order of the elements doesn't matter
  • The number of ways to divide $n$ elements into $r$ groups of sizes $n_1, \dots, n_r$ is the multinomial coefficient $\binom{n}{n_1, \dots, n_r} = \frac{n!}{n_1! \cdots n_r!}$, where element order does not matter but group order does
  • For two events $A$ and $B$ of non-zero probability, the conditional probability of $A$ given $B$, that is, the probability that $A$ happens given that the event $B$ has happened, is denoted $P(A \mid B)$ and defined $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$
  • The Bayes formula shows that the probability of each event $A_i$ of a set of events $A_1, \dots, A_n$ partitioning the sample space, given another event $B$, is
    $P(A_i \mid B) = \frac{P(B \mid A_i)\,P(A_i)}{\sum_{j=1}^{n} P(B \mid A_j)\,P(A_j)}$
    where the events $A_j$ are the other events that can happen, and the event $B$ represents some information
  • Two events are independent if the occurrence of one does not influence the occurrence of the other: $P(A \cap B) = P(A)\,P(B)$
  • If two events are independent, then it follows that their complements (the events not happening) are also independent
  • Sometimes we are not interested in the actual event but in some value we can get from that event. Such a value is called a random variable.
  • For a random variable $X$, the law, or distribution, of this random variable is the probability $P(X \in A)$ that $X$ takes a value in a set $A$
  • A discrete random variable is a random variable that can take at most countably many values.
  • The probability mass function of such a random variable is denoted $p_X$ and gives the probability, between 0 and 1, that the random variable takes on a specific value: $p_X(x) = P(X = x)$
  • The expected value or expectation of a discrete random variable is interpreted as the weighted average of the possible values, each weighted by the probability mass function at that value: $E[X] = \sum_x x\,p_X(x)$
  • A function of a discrete random variable is itself a discrete random variable, with $E[g(X)] = \sum_x g(x)\,p_X(x)$.
  • The variance of a discrete random variable $X$, denoted $\mathrm{Var}(X)$, is how much the variable "varies" around its expectation, and is dependent on the expectation, defined in two equivalent ways:
    $\mathrm{Var}(X) = E\big[(X - E[X])^2\big] = E[X^2] - (E[X])^2$
    If we have a function of the discrete random variable in the form of a linear function $Y = aX + b$, then we have that $\mathrm{Var}(aX + b) = a^2\,\mathrm{Var}(X)$
    • The square root of the variance of a discrete random variable is called the standard deviation, denoted $\sigma_X = \sqrt{\mathrm{Var}(X)}$ (a short counting and expectation sketch follows this list)
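A minimal sketch (plain Python, standard library only; the fair-die pmf is just an illustrative example) checking the counting formulas and the expectation/variance definitions above:

    import math

    n, k = 5, 2
    print(math.factorial(n))   # permutations of n distinct elements: n! = 120
    print(math.perm(n, k))     # k-permutations: n!/(n-k)! = 20
    print(math.comb(n, k))     # combinations "n choose k": n!/(k!(n-k)!) = 10

    # Expectation and variance of a discrete random variable from its pmf
    # (a fair six-sided die).
    pmf = {x: 1/6 for x in range(1, 7)}
    mean = sum(x * p for x, p in pmf.items())
    var = sum(x**2 * p for x, p in pmf.items()) - mean**2   # E[X^2] - E[X]^2
    print(mean, var)           # 3.5, 2.9166...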
Bernoulli Trial

A Bernoulli trial is an experiment whose outcome is either a success or a failure (true/false). The discrete random variable $X$ in this case, with probability of success $p$, is called the Bernoulli random variable of parameter $p$, denoted $X \sim \mathrm{Bernoulli}(p)$, for which the probability that it's zero is $P(X = 0) = 1 - p$, and the probability that it's a one (a success) is $P(X = 1) = p$, with its mean and variance given by $E[X] = p$ and $\mathrm{Var}(X) = p(1-p)$

Sequence of Bernoulli Trials (Binomial)

For a sequence of $n$ independent Bernoulli trials each with probability of success $p$, the discrete random variable $X$ defined as the number of successes in those trials is called the binomial random variable with parameters the probability $p$ and the number of trials $n$, denoted $X \sim \mathrm{Binomial}(n, p)$, with probability mass function $P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$ for $k = 0, 1, \dots, n$ successes, with mean $E[X] = np$ and variance $\mathrm{Var}(X) = np(1-p)$
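A small sketch (plain Python; the helper name binom_pmf is just for illustration) computing the binomial pmf directly from the formula and checking the stated mean and variance by direct summation:

    import math

    def binom_pmf(k, n, p):
        return math.comb(n, k) * p**k * (1 - p)**(n - k)

    n, p = 10, 0.3
    mean = sum(k * binom_pmf(k, n, p) for k in range(n + 1))
    var = sum(k**2 * binom_pmf(k, n, p) for k in range(n + 1)) - mean**2
    print(mean, n * p)            # both 3.0
    print(var, n * p * (1 - p))   # both 2.1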

Geometric Random Variable

For a sequence of independent Bernoulli trials, each with probability of success $p$, the discrete random variable $X$ defined as the number of trials needed to get the first success is called the geometric random variable with parameter the probability $p$, denoted by $X \sim \mathrm{Geometric}(p)$, with probability mass function $P(X = k) = (1-p)^{k-1} p$ for $k = 1, 2, \dots$, with mean $E[X] = \frac{1}{p}$ and variance $\mathrm{Var}(X) = \frac{1-p}{p^2}$

Pascal random variable/Negative Binomial

For a sequence of independent Bernoulli trials with probability of success $p$, the discrete random variable $X$ defined as the number of trials needed to get the $r$-th success is called the negative binomial or Pascal random variable with parameters $r, p$ and is denoted by $X \sim \mathrm{Pascal}(r, p)$, with probability mass function $P(X = k) = \binom{k-1}{r-1} p^r (1-p)^{k-r}$ for $k = r, r+1, \dots$, with mean $E[X] = \frac{r}{p}$ and variance $\mathrm{Var}(X) = \frac{r(1-p)}{p^2}$

Poisson random variable

The number of events occurring independently at a constant rate is modeled through the very important Poisson random variable. For a rate $\lambda > 0$, the discrete random variable $X$ with probability mass function $P(X = k) = e^{-\lambda}\frac{\lambda^k}{k!}$ for $k = 0, 1, 2, \dots$ is called the Poisson random variable with parameter the rate $\lambda$, and is denoted $X \sim \mathrm{Poisson}(\lambda)$, with mean $E[X] = \lambda$ and variance $\mathrm{Var}(X) = \lambda$

Law of rare events

For a rate $\lambda > 0$ and a sequence of binomial random variables $X_n \sim \mathrm{Binomial}(n, \lambda/n)$, the probability mass function of $X_n$ converges pointwise to that of a Poisson random variable with parameter $\lambda$, with
$\lim_{n \to \infty} P(X_n = k) = e^{-\lambda}\frac{\lambda^k}{k!}$
In practice, the number of successes in a large number $n$ of independent Bernoulli trials, each with small probability of success $p$, is approximated by a Poisson random variable of parameter $\lambda = np$ if $np$ is moderate. This result holds in general for $n \to \infty$ and $p \to 0$ with $np \to \lambda$
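A sketch (plain Python) comparing the $\mathrm{Binomial}(n, \lambda/n)$ pmf with the $\mathrm{Poisson}(\lambda)$ pmf for growing $n$, illustrating the pointwise convergence stated above:

    import math

    lam, k = 2.0, 3
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    for n in (10, 100, 10_000):
        p = lam / n
        binom = math.comb(n, k) * p**k * (1 - p)**(n - k)
        print(n, binom, poisson)   # the binomial value approaches the Poisson one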

Hypergeometric Random Variable

Consider a basket with $N$ balls of which $m$ are colored and the rest ($N - m$) are not colored, from which we draw $n$ balls without replacing them. The discrete random variable $X$ defined as the number of colored balls drawn is named the hypergeometric random variable with parameters $N, m, n$ and denoted $X \sim \mathrm{Hypergeometric}(N, m, n)$, with probability mass function
$P(X = k) = \frac{\binom{m}{k}\binom{N-m}{n-k}}{\binom{N}{n}}$

For $N$ very large compared to $n$, the number of balls (events) drawn, drawing without replacement is not that different from drawing WITH replacement. And so, for $N \gg n$ a hypergeometric random variable is approximated by a binomial random variable of parameters $n$ and $p = \frac{m}{N}$, with
$P(X = k) \approx \binom{n}{k} p^k (1-p)^{n-k}$

and mean $E[X] = n\frac{m}{N}$ and variance $\mathrm{Var}(X) = n\frac{m}{N}\left(1 - \frac{m}{N}\right)\frac{N - n}{N - 1}$
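A sketch (plain Python; the helper name hypergeom_pmf is illustrative) of the hypergeometric pmf and its binomial approximation for $N \gg n$, with $p = m/N$:

    import math

    def hypergeom_pmf(k, N, m, n):
        return math.comb(m, k) * math.comb(N - m, n - k) / math.comb(N, n)

    N, m, n, k = 10_000, 3_000, 10, 4        # N >> n, p = m/N = 0.3
    p = m / N
    binom = math.comb(n, k) * p**k * (1 - p)**(n - k)
    print(hypergeom_pmf(k, N, m, n), binom)  # the two values are very close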

Continuous Random Variables and Probability Density Function

The probability theory studied so far only accounts for countably many outcomes, but in practice we deal mostly with uncountable ones. So we need some notion of continuity to account for them.
A random variable $X$ is a continuous random variable if there exists a function $f_X$ called the probability density function (PDF) such that:

  1. The probability that $X$ falls within any interval $[a, b]$ is given by the integral of $f_X$ over that interval: $P(a \le X \le b) = \int_a^b f_X(x)\,dx$
  2. The PDF is non-negative for all $x$: $f_X(x) \ge 0$
  3. The total area under the PDF is 1: $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$
Cumulative Distribution Function

The cumulative distribution function (CDF) of a continuous random variable $X$ is a continuous function defined as $F_X(x) = P(X \le x) = \int_{-\infty}^{x} f_X(t)\,dt$
From this, we understand that the probability density function is the derivative of the cumulative distribution function, $f_X(x) = F_X'(x)$, and the probability that $X$ is part of some interval $[a, b]$ can be calculated using the fundamental theorem of calculus, with $P(a \le X \le b) = F_X(b) - F_X(a)$
For a small interval $[x, x + \delta]$, the probability that $X$ is included in it can be approximated as $P(x \le X \le x + \delta) \approx f_X(x)\,\delta$
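A numeric sanity check (plain Python), assuming an Exponential($\lambda = 1.5$) random variable as the example: the PDF is approximately the derivative of the CDF, and the probability of a small interval is approximately $f_X(x)\,\delta$:

    import math

    lam = 1.5
    f = lambda x: lam * math.exp(-lam * x)       # pdf
    F = lambda x: 1 - math.exp(-lam * x)         # cdf

    x, h = 0.7, 1e-6
    print((F(x + h) - F(x - h)) / (2 * h), f(x))   # central difference ~ pdf

    delta = 0.01
    print(F(x + delta) - F(x), f(x) * delta)       # P(x <= X <= x+delta) ~ f(x)*delta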

Expectations of Continuous Random Variables
  • The expectation or expected value of a continuous random variable $X$ with PDF $f_X$ is defined $E[X] = \int_{-\infty}^{\infty} x\,f_X(x)\,dx$
  • If the continuous random variable can only take non-negative values, then its expectation can also be written $E[X] = \int_0^{\infty} S_X(x)\,dx$, where $S_X(x) = P(X > x) = 1 - F_X(x)$ is called the survival function.
  • If we have a continuous random variable defined as a function of another continuous random variable, $Y = g(X)$, then the expected value/expectation becomes $E[g(X)] = \int_{-\infty}^{\infty} g(x)\,f_X(x)\,dx$, from which we once again have linearity: $E[aX + b] = a\,E[X] + b$
Variance of Continuous Random Variables

The variance for a continuous random variable is given as $\mathrm{Var}(X) = \int_{-\infty}^{\infty} (x - E[X])^2 f_X(x)\,dx$
Just like for discrete random variables, the alternative formula $\mathrm{Var}(X) = E[X^2] - (E[X])^2$ holds in the continuous case, and if $Y = aX + b$ is a linear function of $X$, we have $\mathrm{Var}(aX + b) = a^2\,\mathrm{Var}(X)$

Exponential Random Variable

For some rate $\lambda > 0$, the CRV having PDF $f_X(x) = \lambda e^{-\lambda x}$ for $x \ge 0$ (and 0 otherwise) is called the exponential random variable with parameter $\lambda$, denoted $X \sim \mathrm{Exponential}(\lambda)$, with CDF $F_X(x) = 1 - e^{-\lambda x}$ and mean $E[X] = \frac{1}{\lambda}$ and variance $\mathrm{Var}(X) = \frac{1}{\lambda^2}$
For an exponential random variable, we state that it has a lack of memory, that is, the distribution of the remaining waiting time does not depend on the time already elapsed: $P(X > s + t \mid X > s) = P(X > t)$. Assume that the number of events $N_t$ occurring in a time interval $[0, t]$ is a Poisson random variable with parameter $\lambda t$, so that $N_t \sim \mathrm{Poisson}(\lambda t)$; then we define the Poisson process:

  • The Poisson process: the random time $T$ at which the first event occurs is an exponential random variable with parameter $\lambda$, such that $T \sim \mathrm{Exponential}(\lambda)$
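A simulation sketch (plain Python; parameter values are illustrative) of the lack of memory: conditioned on $X > s$, the remaining time behaves again like an Exponential($\lambda$):

    import random

    lam, s, t, trials = 2.0, 0.5, 0.4, 200_000
    samples = [random.expovariate(lam) for _ in range(trials)]
    survivors = [x for x in samples if x > s]
    lhs = sum(1 for x in survivors if x > s + t) / len(survivors)  # P(X > s+t | X > s)
    rhs = sum(1 for x in samples if x > t) / trials                # P(X > t)
    print(lhs, rhs)   # the two estimates agree up to Monte Carlo noise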
Gamma Random Variable

For a rate $\lambda > 0$ and shape $\alpha > 0$, the CRV with PDF $f_X(x) = \frac{\lambda^{\alpha} x^{\alpha - 1} e^{-\lambda x}}{\Gamma(\alpha)}$ for $x \ge 0$ is called the gamma random variable of parameters $\alpha, \lambda$, denoted $X \sim \mathrm{Gamma}(\alpha, \lambda)$, with mean $E[X] = \frac{\alpha}{\lambda}$ and variance $\mathrm{Var}(X) = \frac{\alpha}{\lambda^2}$
Recall that the gamma function is defined as $\Gamma(\alpha) = \int_0^{\infty} x^{\alpha - 1} e^{-x}\,dx$

Uniform Random Variable

For the unit interval, that is from 0 to 1, $[0, 1]$, the CRV with PDF $f_X(x) = 1$ for $x \in [0, 1]$ (and 0 otherwise) is called the uniform random variable over the interval $[0, 1]$, denoted $X \sim \mathrm{Uniform}(0, 1)$, with CDF $F_X(x) = x$ on $[0, 1]$ and mean/variance: $E[X] = \frac{1}{2}$, $\mathrm{Var}(X) = \frac{1}{12}$
When we have some other interval $[a, b]$, the CRV with PDF $f_X(x) = \frac{1}{b - a}$ for $x \in [a, b]$ (and 0 otherwise) is called the uniform random variable over $[a, b]$, denoted by $X \sim \mathrm{Uniform}(a, b)$, with CDF $F_X(x) = \frac{x - a}{b - a}$ on $[a, b]$ and mean $E[X] = \frac{a + b}{2}$ and variance $\mathrm{Var}(X) = \frac{(b - a)^2}{12}$

Standard Gaussian (normal) Random Variable

The CRV with PDF $\varphi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$ is called the standard Gaussian (normal) random variable, denoted $Z \sim N(0, 1)$, with mean 0 and variance 1

Gaussian (normal) Random Variable

For a mean $\mu$ and variance $\sigma^2 > 0$, the CRV with PDF $f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x - \mu)^2}{2\sigma^2}}$ is called the Gaussian (normal) random variable with params $\mu$ and $\sigma^2$ and denoted by $X \sim N(\mu, \sigma^2)$, with mean $\mu$ and variance $\sigma^2$

Using Gaussian Tables

For a Gaussian random variable with mean (expectation) $\mu$ and variance $\sigma^2$, $X \sim N(\mu, \sigma^2)$, the probability that it belongs to an interval $[a, b]$ is given by
$P(a \le X \le b) = \Phi\!\left(\frac{b - \mu}{\sigma}\right) - \Phi\!\left(\frac{a - \mu}{\sigma}\right)$
where $\Phi$ is the CDF of a standard Gaussian random variable. The values of $\Phi$ are everything needed to compute probabilities with Gaussian random variables. For $z \ge 0$ the values are given in a table. For $z < 0$ the values of $\Phi$ are given via the identity $\Phi(-z) = 1 - \Phi(z)$
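A sketch (plain Python) computing Gaussian probabilities without tables, using the standard identity $\Phi(z) = \frac{1}{2}\big(1 + \mathrm{erf}(z/\sqrt{2})\big)$:

    import math

    def Phi(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    mu, sigma = 10.0, 2.0
    a, b = 9.0, 13.0
    prob = Phi((b - mu) / sigma) - Phi((a - mu) / sigma)
    print(prob)                      # P(9 <= X <= 13) for X ~ N(10, 4)
    print(Phi(-1.0), 1 - Phi(1.0))   # the identity Phi(-z) = 1 - Phi(z)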

Joint Distributions

For two random variables $X, Y$, the joint law or joint distribution of $X$ and $Y$ is the probability measure $P((X, Y) \in B)$ that the point $(X, Y)$ (both variables at once) belongs to a subset $B$ of the plane.

Joint Cumulative Distribution Function

For two random variables $X, Y$, the joint cumulative distribution function is defined
$F_{X,Y}(x, y) = P(X \le x, Y \le y)$
and is non-decreasing in each component and is between 0 and 1.

Marginal Laws

The marginal laws or marginal distributions of random variables $X, Y$ are the individual probability measures $P(X \in A)$ and $P(Y \in B)$

From this it becomes clear that the cumulative distribution functions for $X$ and $Y$ individually can be retrieved from their joint cumulative distribution function as
$F_X(x) = \lim_{y \to \infty} F_{X,Y}(x, y) \qquad F_Y(y) = \lim_{x \to \infty} F_{X,Y}(x, y)$
and are called the marginal cumulative distribution functions

Joint Probability Mass Functions and Marginal JPM

The joint probability mass function of the variables $X$ and $Y$ is defined
$p_{X,Y}(x, y) = P(X = x, Y = y)$
from which we find that the probability mass functions of $X$ and $Y$ individually can be retrieved from their joint probability mass function by
$p_X(x) = \sum_y p_{X,Y}(x, y) \qquad p_Y(y) = \sum_x p_{X,Y}(x, y)$
and are promptly called marginal probability mass functions.
The joint probability mass function of two discrete random variables is positive for at most a countable number of paired values, sums up to a total of 1 and is usually represented using tables.

Joint Continuous Random Variables

Two random variables $X, Y$ are called jointly continuous random variables if there exists a non-negative integrable function $f_{X,Y}$, the joint probability density function, such that for each subset $B$ of the plane
$P((X, Y) \in B) = \iint_B f_{X,Y}(x, y)\,dx\,dy$
The joint cumulative distribution function is a continuous function, defined
$F_{X,Y}(x, y) = \int_{-\infty}^{x}\int_{-\infty}^{y} f_{X,Y}(u, v)\,dv\,du$
from which we find that the joint PDF is the second cross-derivative of the joint cumulative distribution function, $f_{X,Y}(x, y) = \frac{\partial^2 F_{X,Y}}{\partial x\,\partial y}(x, y)$, and that the probability that $X$ and $Y$ are included in intervals $[a, b]$ and $[c, d]$ can be computed as
$P(a \le X \le b,\ c \le Y \le d) = \int_a^b \int_c^d f_{X,Y}(x, y)\,dy\,dx$

The probability density functions of $X$ and $Y$ can be retrieved from their joint probability density function as
$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy \qquad f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx$
and are named marginal probability density functions. The marginal cumulative distribution functions of $X$ and $Y$ are retrieved as before, by letting the other variable tend to infinity.

Independence of Random Variables

Two random variables are independent if for each pair of subsets $A, B$ we have that $P(X \in A, Y \in B) = P(X \in A)\,P(Y \in B)$, and their independence is usually denoted by $X \perp Y$.
Alternatively, we can re-define independence of random variables as: for each pair of subsets $A, B$, the events $\{X \in A\}$ and $\{Y \in B\}$ are independent.
We can also re-state it as: if $X, Y$ are independent, their joint distribution factorizes into the product of the marginal distributions.
From this we can find that two random variables $X$ and $Y$ are independent if for each $x, y$ we have
$F_{X,Y}(x, y) = F_X(x)\,F_Y(y)$
meaning the joint cumulative distribution function factorizes into the product of the marginals.
Additionally, for discrete random variables, in terms of probability mass functions, $p_{X,Y}(x, y) = p_X(x)\,p_Y(y)$
For continuous random variables, in terms of probability density functions, $f_{X,Y}(x, y) = f_X(x)\,f_Y(y)$

Sums of independent random variables

For a random variable defined as a sum of two other independent random variables $X, Y$, so that $Z = X + Y$, the distribution of $Z$ is their convolution.
For discrete random variables, $p_Z(z) = \sum_x p_X(x)\,p_Y(z - x)$
For continuous random variables, $f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\,f_Y(z - x)\,dx$
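A sketch (plain Python) of the discrete convolution formula: the pmf of the sum of two independent fair dice, $p_Z(z) = \sum_x p_X(x)\,p_Y(z - x)$:

    pX = {x: 1/6 for x in range(1, 7)}
    pY = {y: 1/6 for y in range(1, 7)}

    pZ = {}
    for x, px in pX.items():
        for y, py in pY.items():
            pZ[x + y] = pZ.get(x + y, 0.0) + px * py

    print(pZ[7])              # 6/36, the most likely total
    print(sum(pZ.values()))   # 1.0, a valid pmf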

Triangular Distribution

For two independent continuous uniform random variables $X \sim \mathrm{Uniform}(0, 1)$, $Y \sim \mathrm{Uniform}(0, 1)$, the probability density function of $Z = X + Y$ is $f_Z(z) = z$ for $0 \le z \le 1$, $f_Z(z) = 2 - z$ for $1 < z \le 2$, and 0 otherwise, and is called the triangular distribution.

More Cases of Convolutions

For discrete random variables:

  • For $X \sim \mathrm{Binomial}(n, p)$ and $Y \sim \mathrm{Binomial}(m, p)$, independently, then $X + Y \sim \mathrm{Binomial}(n + m, p)$
  • For $X \sim \mathrm{Poisson}(\lambda)$ and $Y \sim \mathrm{Poisson}(\mu)$, independently, then $X + Y \sim \mathrm{Poisson}(\lambda + \mu)$

For continuous random variables:

  • If $X \sim \mathrm{Gamma}(\alpha, \lambda)$ and $Y \sim \mathrm{Gamma}(\beta, \lambda)$, independently, then $X + Y \sim \mathrm{Gamma}(\alpha + \beta, \lambda)$
  • If $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$, independently, then $X + Y \sim N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$
Independent and Identically Distributed Random Variables

A sequence of random variables $X_1, X_2, \dots$ is independent and identically distributed (i.i.d.) if there exists a CDF $F$ such that $F_{X_i} = F$ for every $i$, and $X_1, \dots, X_n$ are independent for every $n$
This concept is used to characterize the binomial and gamma random variables through i.i.d. sequences of SIMPLER random variables:

  • For an i.i.d. sequence $X_1, \dots, X_n$ of $\mathrm{Bernoulli}(p)$ random variables, $X_1 + \dots + X_n \sim \mathrm{Binomial}(n, p)$
  • For an i.i.d. sequence $X_1, \dots, X_n$ of $\mathrm{Exponential}(\lambda)$ random variables, $X_1 + \dots + X_n \sim \mathrm{Gamma}(n, \lambda)$
Generalization of Mean(Expectation)

The formula learned before for the mean (expectation) can be generalized to two random variables. For two random variables $X$ and $Y$, and some function $g$:

  • For discrete random variables, the expectation of the discrete random variable $g(X, Y)$ is $E[g(X, Y)] = \sum_x \sum_y g(x, y)\,p_{X,Y}(x, y)$
  • For jointly continuous random variables, the expected value of the continuous random variable $g(X, Y)$ is $E[g(X, Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x, y)\,f_{X,Y}(x, y)\,dx\,dy$
Expectation of sums of random variables

For a sum of random variables $X_1 + \dots + X_n$, the expected value of their sum can be found as
$E[X_1 + \dots + X_n] = E[X_1] + \dots + E[X_n]$

Covariance

The variance of a random variable measures how much it changes around its expectation. So, how do we measure the joint variability of two random variables?
The covariance of random variables $X$ and $Y$ is defined as
$\mathrm{Cov}(X, Y) = E\big[(X - E[X])(Y - E[Y])\big]$
Two random variables $X$ and $Y$ are uncorrelated if they have zero covariance: $\mathrm{Cov}(X, Y) = 0$

Properties of Covariance
  • Alternative formula: $\mathrm{Cov}(X, Y) = E[XY] - E[X]\,E[Y]$
  • The covariance of a variable with itself equals its own variance: $\mathrm{Cov}(X, X) = \mathrm{Var}(X)$
  • If $Y = c$ is a constant random variable, $\mathrm{Cov}(X, c) = 0$
  • If $Y = aX + b$ is a linear function, $\mathrm{Cov}(aX + b, Z) = a\,\mathrm{Cov}(X, Z)$
  • Bilinearity: for sequences of random variables $X_1, \dots, X_n$ and $Y_1, \dots, Y_m$, $\mathrm{Cov}\!\left(\sum_{i=1}^{n} a_i X_i,\ \sum_{j=1}^{m} b_j Y_j\right) = \sum_{i=1}^{n}\sum_{j=1}^{m} a_i b_j\,\mathrm{Cov}(X_i, Y_j)$
  • Consequence of bilinearity: the variance of a sum of random variables is $\mathrm{Var}\!\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \mathrm{Var}(X_i) + 2\sum_{i < j} \mathrm{Cov}(X_i, X_j)$
    In particular, if $X_1, \dots, X_n$ are pairwise uncorrelated, then $\mathrm{Var}\!\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \mathrm{Var}(X_i)$ (see the numeric check after this list)
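A numeric sketch (numpy assumed available) checking the identities above on simulated data: $\mathrm{Cov}(X, X) = \mathrm{Var}(X)$ and $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)$:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=100_000)
    Y = 0.5 * X + rng.normal(size=100_000)   # correlated with X by construction

    cov = np.cov(X, Y, bias=True)            # 2x2 covariance matrix
    print(cov[0, 0], X.var())                               # Cov(X,X) = Var(X)
    print((X + Y).var(), X.var() + Y.var() + 2 * cov[0, 1]) # variance of a sum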
Expectation and Independence

For two discrete or continuous random variables $X$ and $Y$, and two real-valued functions $g$ and $h$, if $X$ and $Y$ are independent random variables, then $E[g(X)\,h(Y)] = E[g(X)]\,E[h(Y)]$, from which we find a very simple but extremely important result: independent random variables are uncorrelated, $X \perp Y \Rightarrow \mathrm{Cov}(X, Y) = 0$
The converse is NOT TRUE, except only for Bernoulli random variables and bivariate Gaussian random variables (not studied in this course), for which zero correlation implies independence.

Expectations of i.i.d sequences

For an i.i.d. sequence of random variables $X_1, \dots, X_n$ such that $E[X_i] = \mu$ and $\mathrm{Var}(X_i) = \sigma^2$, focusing on their sum $S_n = X_1 + \dots + X_n$:

  • The expectation of their sum is $E[S_n] = n\mu$
  • The variance of their sum is $\mathrm{Var}(S_n) = n\sigma^2$
    These two properties can be used to compute the mean and variance of binomial and gamma random variables, as sums of i.i.d. Bernoulli and exponential sequences respectively
Correlation

For two random variables $X, Y$ with strictly positive variances, their correlation is defined as
$\rho(X, Y) = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}$

and is a measure of how much these variables are related. The value of the correlation is a number between $-1$ and $1$.

Perfect correlation

Two random variables are perfectly correlated if their squared correlation is 1, that is, $\rho(X, Y)^2 = 1$, i.e. $\rho(X, Y) = \pm 1$
We find then that two random variables are perfectly correlated if and only if $Y$ is a linear function of $X$, $Y = aX + b$ with $a \neq 0$

This highlights that the squared correlation is a measure of how linearly dependent the two variables $X$ and $Y$ are.

Analysis

Complex Numbers, Rectangular Form, Polar Form, Exponential Form

A complex number is a number $z = x + iy$ where $x = \mathrm{Re}(z)$ is the real part and $y = \mathrm{Im}(z)$ is the imaginary part, and where $i$, the imaginary unit, is defined by $i^2 = -1$. This is called its rectangular form, as $x$ and $y$ can be represented as the two sides of a rectangle.

Representing the complex number as a vector, we define the modulus or absolute value of the complex number to be the magnitude of the vector, $|z| = \sqrt{x^2 + y^2}$, and the argument or angle of the complex number, $\theta = \arg(z)$, satisfying $\tan\theta = \frac{y}{x}$, so that $z = |z|(\cos\theta + i\sin\theta)$
This is called the polar form, from which we also define the exponential form, which stems from Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$, giving the exponential form for representing the complex number $z$: $z = |z|\,e^{i\theta}$
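A sketch converting between rectangular, polar and exponential forms with Python's standard cmath module:

    import cmath

    z = 3 + 4j
    r, theta = cmath.polar(z)          # modulus and (principal) argument
    print(r, theta)                    # 5.0, atan2(4, 3)
    print(abs(z), cmath.phase(z))      # the same quantities individually
    print(cmath.rect(r, theta))        # back to rectangular: (3+4j)
    print(r * cmath.exp(1j * theta))   # exponential form |z| e^{i theta}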

The Complex Conjugate and Inverse

The complex conjugate of a complex number $z = x + iy$ is the complex number obtained by reflecting $z$ about the real axis: $\bar{z} = x - iy$, from which we obtain a very important formula: $z\bar{z} = |z|^2$
The inverse of a complex number $z \neq 0$ is the complex number $z^{-1}$ such that $z\,z^{-1} = 1$, given by $z^{-1} = \frac{\bar{z}}{|z|^2}$

Uniqueness of The Argument

The argument is not unique. Infinitely many complex numbers can have the same argument (all the points on the same ray from the origin).
Conversely, a complex number has infinitely many values of the argument, differing by integer multiples of $2\pi$, because of the periodicity of the sinusoids. The principal argument is the unique argument in one fixed period, conventionally the interval $(-\pi, \pi]$.

Periodicity of The Complex Exponential

Let $z = x + iy$, then $e^z = e^x(\cos y + i\sin y)$, and the complex exponential is periodic with period $2\pi i$: $e^{z + 2\pi i} = e^z$

Complex Trigonometric and Hyperbolic Functions

The trigonometric and hyperbolic functions extend to the complex plane through the exponential:
$\cos z = \frac{e^{iz} + e^{-iz}}{2} \qquad \sin z = \frac{e^{iz} - e^{-iz}}{2i} \qquad \cosh z = \frac{e^{z} + e^{-z}}{2} \qquad \sinh z = \frac{e^{z} - e^{-z}}{2}$
with the links $\cos(iz) = \cosh z$ and $\sin(iz) = i\sinh z$

Topological Notions

For any complex number $z_0$ and radius $r > 0$ we define an open ball of radius $r$ centered at $z_0$ as the set of all complex numbers inside that ball (disc): $B_r(z_0) = \{z \in \mathbb{C} : |z - z_0| < r\}$
For any set of complex numbers $A \subseteq \mathbb{C}$ we have:

  • $A$ is open if for every $z \in A$ there exists a radius $r > 0$ such that the ball $B_r(z)$ is entirely part of the set
  • A complex number $z$ belongs to the boundary $\partial A$ of the set if for every $r > 0$ the intersection of the ball $B_r(z)$ with both the set and its complement is not empty, i.e. it belongs to the boundary if it's at the outermost edge of the set, so that however small a radius we look within, we will always find both a point of the set and a point not part of the set
  • The closure of the set is the union of $A$ and its boundary: $\bar{A} = A \cup \partial A$
  • The interior of the set is the inner part of $A$: $\mathring{A} = A \setminus \partial A$
  • A point $z$ is an accumulation point for the set $A$ if for every $r > 0$ the intersection of the ball $B_r(z)$ with the set contains infinitely many points, i.e. if there are infinitely many points of $A$ arbitrarily close to it
Complex Functions and Curves in the Complex Plane

A function of a complex number can be expressed as a combination of two real functions of its real and imaginary parts: $f(z) = f(x + iy) = u(x, y) + i\,v(x, y)$
A function of a complex number is continuous if and only if the functions $u$ and $v$ of its real and imaginary parts are continuous

Limits and Continuity in the Complex Plane

The ordinary rules of limits and continuity hold for complex functions, where we can treat a complex function as the combination of two real functions of two real variables.

Differentiability of Complex Functions

For an open set $A$ in the complex plane, a function $f$ of a complex variable is differentiable at a point $z_0$ in this set if the limit
$f'(z_0) = \lim_{h \to 0} \frac{f(z_0 + h) - f(z_0)}{h}$
exists; in the case that it does, it is called the complex derivative of $f$ at $z_0$

The Cauchy-Riemann Equations

The Cauchy-Riemann equations express a unique approach to complex derivatives. For a function of a complex variable we express it as a combination of real functions of its real and imaginary parts, $f = u + iv$.
In this case we have that $f$ is differentiable at a point $z_0 = x_0 + iy_0$ if and only if both $u$ and $v$ are differentiable at $(x_0, y_0)$ and the Cauchy-Riemann equations are satisfied:
$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$

If this is the case, we have that $f'(z_0) = \frac{\partial u}{\partial x}(x_0, y_0) + i\,\frac{\partial v}{\partial x}(x_0, y_0)$
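A finite-difference sketch (plain Python) checking the Cauchy-Riemann equations for the entire function $f(z) = e^z$, whose parts are $u = e^x\cos y$ and $v = e^x\sin y$:

    import math

    def u(x, y): return math.exp(x) * math.cos(y)
    def v(x, y): return math.exp(x) * math.sin(y)

    x0, y0, h = 0.3, -0.8, 1e-6
    ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
    uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
    vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
    vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)
    print(ux, vy)    # u_x = v_y
    print(uy, -vx)   # u_y = -v_x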

Holomorphic (Analytic) and Entire Functions

A function is holomorphic or analytic if it is differentiable everywhere in an open set or domain.
A function is entire if it is holomorphic over the entire complex plane.

Harmonic Functions

A function $u(x, y)$ is harmonic if its Laplacian is zero everywhere: $\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$

  • holomorphic $\Rightarrow$ the real and imaginary parts are harmonic; the converse is not true
  • If $f = u + iv$ is holomorphic, $v$ is called the harmonic conjugate of $u$
    CAUTION: the harmonic conjugate of the harmonic conjugate of $u$ is $-u$, not $u$, unless $u$ and $v$ are constant
Simple, Closed and Jordan Curves

A curve is

  • Simple if it never intersects itself
  • Closed if it starts and ends at the same point
  • Smooth if it is differentiable
  • Jordan Curve if it is simple and closed
Integrals of Complex Curves

Linearity also holds for integration in the complex plane, where for a complex-valued function of a real variable, $f(t) = u(t) + i\,v(t)$, the integral is expressed
$\int_a^b f(t)\,dt = \int_a^b u(t)\,dt + i\int_a^b v(t)\,dt$

Arc Length of a Complex Curve

The length of a (at least piecewise) differentiable curve $\gamma : [a, b] \to \mathbb{C}$ is given by the integral
$L(\gamma) = \int_a^b |\gamma'(t)|\,dt$

Integrals over Curves of Complex Functions

For some continuous function $f$ and a differentiable curve $\gamma : [a, b] \to \mathbb{C}$, the line integral of $f$ along $\gamma$ is defined
$\int_{\gamma} f(z)\,dz = \int_a^b f(\gamma(t))\,\gamma'(t)\,dt$

Integrals over Boundaries

A domain is just an open and connected set of complex numbers, and it is regular if its boundary is a combination of the images of finitely many Jordan curves pieced together. The integral over its boundary is then the sum of the integrals along these Jordan curves:
$\int_{\partial D} f(z)\,dz = \sum_j \int_{\gamma_j} f(z)\,dz$

Green's Theorem Refresher (Gauss-Green Formula)

For some regular bounded domain $D$ and a vector field $(P, Q)$ differentiable over $D$ and continuous on at least the closure $\bar{D}$, we can find the line integral of the field along the boundary of the domain as a double integral over the domain itself:
$\oint_{\partial D} P\,dx + Q\,dy = \iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dx\,dy$

The Cauchy-Goursat Theorem

For some regular domain $D$, if a function $f$ is holomorphic on $D$ and continuous on $\bar{D}$, then the integral of that function along the boundary of that domain is zero:
$\int_{\partial D} f(z)\,dz = 0$

Jordan's Curve Theorem

As a consequence, the integral of a function holomorphic on a Jordan curve and throughout its interior is zero: $\oint_{\gamma} f(z)\,dz = 0$

Winding Number

The winding number or index of a curve $\gamma$ around a point $z_0$ in the complex plane not part of this curve is defined as
$\mathrm{Ind}_{\gamma}(z_0) = \frac{1}{2\pi i} \oint_{\gamma} \frac{dz}{z - z_0}$
and represents the number of times this curve encircles the point counter-clockwise. The sign is negative if the curve winds clockwise.
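A numeric sketch (plain Python) of the winding number: integrate $\frac{1}{z - z_0}$ over a circle traversed twice counter-clockwise and recover the index 2:

    import cmath

    z0, R, N = 0.5 + 0.2j, 1.0, 20_000
    total = 0.0 + 0.0j
    for k in range(N):
        t0, t1 = 4 * cmath.pi * k / N, 4 * cmath.pi * (k + 1) / N   # two full turns
        z_a = z0 + R * cmath.exp(1j * t0)
        z_b = z0 + R * cmath.exp(1j * t1)
        total += (z_b - z_a) / (z_a - z0)       # dz / (z - z0), left endpoint rule
    print(total / (2j * cmath.pi))              # approximately 2 + 0i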

The Cauchy Integral Formula

For some function $f$ holomorphic on a regular domain $D$ and continuous on $\bar{D}$, for any $z \in D$, the Cauchy Integral Formula states
$f(z) = \frac{1}{2\pi i} \oint_{\partial D} \frac{f(w)}{w - z}\,dw$

The Cauchy Integral Formula For Derivatives

The above is useful because it extends to derivatives, and we have that for a function $f$ holomorphic over a regular domain $D$, for any $z \in D$, the $n$-th derivative of $f$ exists and is given by
$f^{(n)}(z) = \frac{n!}{2\pi i} \oint_{\partial D} \frac{f(w)}{(w - z)^{n+1}}\,dw$

Holomorphic Functions are Infinitely Differentiable

The Cauchy Integral Formula proves that a holomorphic function of a complex variable is infinitely differentiable, and that the functions of its real and imaginary parts composing it are also infinitely differentiable.

Liouville's Theorem

Liouville's Theorem is a direct consequence of the Cauchy Integral Formula and states that if a complex function is entire and bounded, then it is constant, i.e. its derivative is identically zero.
NOTE: the hypothesis that the function is entire is essential; a bounded function that is not entire need not be constant.

The Fundamental Theorem of Algebra

Every non-constant polynomial (polynomial of degree $\ge 1$) has a complex root; counting multiplicity, a polynomial of degree $n$ has exactly $n$ complex roots. Moreover, for polynomials with real coefficients, complex roots that involve an imaginary part come in conjugate pairs: if such a polynomial has a complex root, then its conjugate must necessarily also be a root.

Sequences and Series of Complex Numbers

Sequences and series of complex numbers behave just like those of real numbers, but for convergence we consider both their real and imaginary parts.

Complex Power Series

For some point $z_0$ in the complex plane, the power series centered at $z_0$ with coefficients given by a sequence $(a_n)$ is defined as
$\sum_{n=0}^{\infty} a_n (z - z_0)^n$
for any point $z$ at which it converges.

Radius Of Convergence

For any power series of a complex variable there exists a radius of convergence $R \in [0, \infty]$ such that for any $|z - z_0| < R$ the series converges (absolutely). Moreover, if $|z - z_0| > R$ the series diverges. If we have $|z - z_0| = R$ we can't really know whether it converges or not.

We can find the radius of convergence by studying the following limits, and if either exists it is equal to $R$:
$R = \lim_{n \to \infty} \left|\frac{a_n}{a_{n+1}}\right| \qquad R = \lim_{n \to \infty} \frac{1}{\sqrt[n]{|a_n|}}$

Complex Power Series are Holomorphic

Every complex power series with radius of convergence $R > 0$ is holomorphic on $|z - z_0| < R$, and as a result infinitely differentiable, with the derivative obtained by differentiating term by term,
$f'(z) = \sum_{n=1}^{\infty} n\,a_n (z - z_0)^{n-1}$
and similarly for the $k$-th derivative of the complex power series.

Complex Taylor Series Expansion

From the above we get the definition of the Taylor series expansion, where any holomorphic function can be written locally as a convergent power series of radius $R > 0$,
$f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(z_0)}{n!}(z - z_0)^n \qquad |z - z_0| < R$

Identity Principle for Holomorphic Functions

Two functions holomorphic on a connected domain that agree on a subset having an accumulation point inside the domain agree on the entire domain.

Annuli

An annulus is a ring: all complex numbers between two circles of radii $r$ and $R$ centered at the same point, $A_{r,R}(z_0) = \{z \in \mathbb{C} : r < |z - z_0| < R\}$

The Laurent Expansion

For some point $z_0$ and $f$ holomorphic on the annulus $A_{r,R}(z_0)$, i.e. the set of values that are in the ring between these two circles, there exist coefficients $a_n$ for $n \in \mathbb{Z}$ such that
$f(z) = \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n, \qquad z \in A_{r,R}(z_0)$
where $a_n = \frac{1}{2\pi i}\oint_{\gamma} \frac{f(w)}{(w - z_0)^{n+1}}\,dw$ for any Jordan curve $\gamma$ in the annulus encircling $z_0$. Moreover, the above double series, defined as the Laurent expansion,
is unique. WARNING: a complex function may have different Laurent expansions in different annuli.

Isolated Singularities

If a function is holomorphic (differentiable) everywhere around some point but not necessarily at the point itself, then this point is an isolated singularity for that function.

Principal Parts, Essential Singularities, Poles and Residues

For some point $z_0$ which is an isolated singularity for a function $f$ with Laurent expansion $f(z) = \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n$:

  • The negative-power part of the Laurent expansion, $\sum_{n=-\infty}^{-1} a_n (z - z_0)^n$, is called the principal part of $f$ at $z_0$
  • $z_0$ is a removable singularity if its principal part is zero
  • $z_0$ is a pole of order $N$ if $a_{-N} \neq 0$ and $a_n = 0$ for $n < -N$:
    • A simple pole if $N = 1$
    • A double pole if $N = 2$
  • $z_0$ is an essential singularity if the principal part has infinitely many non-zero terms
  • The residue of $f$ at $z_0$ is defined $\mathrm{Res}(f, z_0) = a_{-1}$

Simplifying Poles and Residues

  • For a function of the form $f(z) = \frac{g(z)}{z - z_0}$ with $g$ holomorphic around $z_0$, the point $z_0$ is a simple pole for $f$ if and only if $g(z_0) \neq 0$, in which case the residue can be computed as $\mathrm{Res}(f, z_0) = g(z_0) = \lim_{z \to z_0}(z - z_0)\,f(z)$

  • Similarly, $z_0$ is a pole of order $N$ for $f$ if and only if $f(z) = \frac{g(z)}{(z - z_0)^N}$, where $g$ is holomorphic around $z_0$ with $g(z_0) \neq 0$, and the residue is computed as $\mathrm{Res}(f, z_0) = \frac{g^{(N-1)}(z_0)}{(N-1)!}$

  • If a function is expressed as a ratio of two other functions, $f = \frac{g}{h}$, with $g$ and $h$ holomorphic around $z_0$, $g(z_0) \neq 0$, $h(z_0) = 0$ and $h'(z_0) \neq 0$, then $z_0$ is a simple pole for $f$ and the residue for that pole is $\mathrm{Res}(f, z_0) = \frac{g(z_0)}{h'(z_0)}$

    • The residue of a function that behaves like $\frac{c}{z - z_0}$ near a point $z_0$ is the value $c$, and essentially represents how the function "blows up" near that pole.

Residue Theorem

For some regular domain $D$ and some function $f$ continuous on $\bar{D}$ and holomorphic on that domain except for some points $z_1, \dots, z_n \in D$, the integral of the function on the boundary curve of that domain can be obtained by piecing together the residues at those points:
$\int_{\partial D} f(z)\,dz = 2\pi i \sum_{k=1}^{n} \mathrm{Res}(f, z_k)$
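A numeric sketch (plain Python; the helper name contour_integral is illustrative) for $f(z) = \frac{1}{z^2 + 1}$, which has simple poles at $\pm i$ with residues $\frac{1}{2i}$ and $-\frac{1}{2i}$. A circle of radius 2 around 0 encloses both poles, so the residues cancel and the integral is 0; a circle of radius 1 around $i$ encloses only $+i$, giving $2\pi i \cdot \frac{1}{2i} = \pi$:

    import cmath

    def contour_integral(f, center, radius, N=50_000):
        total = 0.0 + 0.0j
        for k in range(N):
            a = center + radius * cmath.exp(2j * cmath.pi * k / N)
            b = center + radius * cmath.exp(2j * cmath.pi * (k + 1) / N)
            total += f(a) * (b - a)      # f(z) dz along the circle
        return total

    f = lambda z: 1 / (z * z + 1)
    print(contour_integral(f, 0, 2))     # ~0 (the two residues cancel)
    print(contour_integral(f, 1j, 1))    # ~pi = 2*pi*i * Res(f, i)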

The Argument Principle

A complex number $z_0$ is a zero of order $m$ for a holomorphic function $f$ if there exists a ball of radius $r$ around that point such that $f(z) = (z - z_0)^m g(z)$ for some function $g$ holomorphic on that ball with $g(z_0) \neq 0$
We use this to state a theorem:

  • Let $f$ be a function holomorphic over a domain except for a finite number of points, and let $\gamma$ be some counterclockwise-oriented Jordan curve so that $\gamma$ contains neither zeros nor singularities of $f$, and assuming that in the interior of $\gamma$ the singularities of $f$ are all poles, of orders $n_1, \dots, n_k$ respectively, and that there are zeros of $f$ of orders $m_1, \dots, m_j$ respectively, then we have that
    $\frac{1}{2\pi i} \oint_{\gamma} \frac{f'(z)}{f(z)}\,dz = \sum_{i=1}^{j} m_i - \sum_{i=1}^{k} n_i$

Support

The support of a complex-valued signal $x$ is defined as the closure of the set of all values for which the signal isn't zero:
$\mathrm{supp}(x) = \overline{\{t \in \mathbb{R} : x(t) \neq 0\}}$

Locally Integrable Signals

A signal $x$ is integrable on a finite interval $[a, b]$ if $\int_a^b |x(t)|\,dt < \infty$
A signal is locally integrable if it is integrable on every bounded interval

Indicator, Sign and Heaviside (Step) Functions
  • The indicator function of a set $A$ is $\mathbb{1}_A(t) = 1$ if $t \in A$, and 0 otherwise
  • The sign function is $\mathrm{sgn}(t) = 1$ for $t > 0$, $-1$ for $t < 0$, and 0 for $t = 0$
  • The Heaviside step function is $u(t) = 1$ for $t \ge 0$, and 0 for $t < 0$

Supremum Norm (Infinity-norm)

The supremum norm, also called the $\infty$-norm, of a signal $x$ over an interval $I$ is defined
$\|x\|_{\infty, I} = \sup_{t \in I} |x(t)|$

Uniform Convergence

A sequence of signals $(x_n)$, with all of these signals defined over the same interval $I$, is uniformly convergent to $x$ on $I$ if
$\lim_{n \to \infty} \|x_n - x\|_{\infty, I} = 0$

Test Functions and Compact Support

  • The support of a function is compact if it is zero everywhere except on some bounded interval
  • A test function is a signal that is infinitely differentiable and has compact support

Convergence of Test Functions

A sequence of test functions $(\varphi_n)$ converges to $\varphi$ in the domain $\mathcal{D}$ if

  • there is a bounded interval $I$ that contains the supports of all the $\varphi_n$ (and of $\varphi$), and the infinity norms of all derivatives of the difference converge to 0:
    $\lim_{n \to \infty} \|\varphi_n^{(k)} - \varphi^{(k)}\|_{\infty} = 0 \quad \text{for every } k \ge 0$

Functionals and Distributions

The set of test functions $\mathcal{D}$ is a vector space.

  • Functionals are functions of signals. They are maps $T : \mathcal{D} \to \mathbb{C}$, denoted $\varphi \mapsto T(\varphi)$
    In a sense, the function itself is a variable, but A FUNCTIONAL CAN ONLY RETURN A SCALAR
    For example, $T(\varphi) = \int_{-\infty}^{\infty} \varphi(t)\,dt$ is a functional: for each test function $\varphi$ it yields a single number.

  • We have then that a functional $T$ is a distribution if it is:

    • Linear: $T(a\varphi + b\psi) = a\,T(\varphi) + b\,T(\psi)$
    • Continuous: $\varphi_n \to \varphi$ in $\mathcal{D}$ implies $T(\varphi_n) \to T(\varphi)$

Regular Distributions

A distribution is regular if there exists a locally integrable signal $x$ such that it equals the functional
$T_x(\varphi) = \int_{-\infty}^{\infty} x(t)\,\varphi(t)\,dt$
When this is the case, $T_x$ is called the regular distribution associated to $x$.

The Dirac Delta Distribution

The functional defined as
$\delta(\varphi) = \varphi(0)$
is a distribution called Dirac's delta distribution. For some constant $t_0$, the shifted Dirac delta is defined $\delta_{t_0}(\varphi) = \varphi(t_0)$
WARNING: the Dirac delta is NOT a regular distribution. By convention we write $\int_{-\infty}^{\infty} \delta(t)\,\varphi(t)\,dt = \varphi(0)$ for the delta, but despite this, it is NOT a regular distribution.

Differentiation of Distributions

For some distribution $T$, the distributional derivative of $T$ is the distribution $T'$ where
$T'(\varphi) = -T(\varphi')$
Every distribution has a distributional derivative.
For some signal $x$ differentiable everywhere except for some points $t_1, \dots, t_n$ which are jump discontinuities, we have that
$T_x' = T_{x'} + \sum_{k=1}^{n} \big(x(t_k^+) - x(t_k^-)\big)\,\delta_{t_k}$
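As a worked example of these definitions, take the Heaviside step $u$ with its unit jump at $t = 0$: for every test function $\varphi$,
$T_u'(\varphi) = -T_u(\varphi') = -\int_0^{\infty} \varphi'(t)\,dt = \varphi(0) = \delta(\varphi)$
(the boundary term at infinity vanishes because $\varphi$ has compact support), so $u' = \delta$ in the sense of distributions, consistent with the jump formula above.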

Principal Value of 1/t, the Distributional Derivative of $\ln|t|$

The distribution denoted $\mathrm{p.v.}\frac{1}{t}$, where p.v. is read as "principal value of", is defined as
$\mathrm{p.v.}\frac{1}{t}(\varphi) = \lim_{\epsilon \to 0^+} \int_{|t| > \epsilon} \frac{\varphi(t)}{t}\,dt$
from which as a consequence we find that the distributional derivative of $\ln|t|$ is
$(T_{\ln|t|})' = \mathrm{p.v.}\frac{1}{t}$

The Distributional Derivative of $\mathrm{p.v.}\frac{1}{t}$

For every test function $\varphi$ we find that the distributional derivative of $\mathrm{p.v.}\frac{1}{t}$ is
$\left(\mathrm{p.v.}\frac{1}{t}\right)'(\varphi) = -\lim_{\epsilon \to 0^+}\left(\int_{|t| > \epsilon} \frac{\varphi(t)}{t^2}\,dt - \frac{2\varphi(0)}{\epsilon}\right) = -\mathrm{P.f.}\frac{1}{t^2}(\varphi)$
where P.f. stands for partie finie (finite part).

Rescaling of Distributions

For a distribution $T$ and some nonzero value $a$, the rescaled distribution $T(at)$ is defined
$T(at)(\varphi) = \frac{1}{|a|}\,T\!\left(\varphi\!\left(\frac{t}{a}\right)\right)$

Multiplication of a Distribution by an Infinitely Differentiable Function

For some distribution $T$ and an infinitely differentiable signal $\alpha$, the product distribution $\alpha T$ is defined
$(\alpha T)(\varphi) = T(\alpha\varphi)$

Convergence of Distributions

For some distribution $T$ and a sequence of distributions $(T_n)$, the sequence converges to $T$, in the sense of distributions, much like a function converges to a limit, if
$\lim_{n \to \infty} T_n(\varphi) = T(\varphi) \quad \text{for every test function } \varphi$
If a sequence of continuous functions converges uniformly to a continuous function, then the distributions associated to that sequence also converge, in the sense of distributions, to the distribution associated to that function.

Distributions with Compact Supports

If a distribution has compact support, then it extends to a linear continuous functional on all infinitely differentiable signals, not only on test functions.

Convolution of Functions

The convolution of two locally integrable signals $x$ and $y$ is the signal
$(x * y)(t) = \int_{-\infty}^{\infty} x(s)\,y(t - s)\,ds$
defined for every $t$ for which the integral converges

Properties of Distributions and their Convolutions

For distributions $S$ and $T$, of which at least one has compact support, the convolution $S * T$ is defined, and:

  • $S * T = T * S$ (commutativity)
  • $(S * T)' = S' * T = S * T'$
  • For any distribution $T$, the delta acts as the identity: $\delta * T = T$, and the shifted delta shifts the distribution: $\delta_{t_0} * T = T(t - t_0)$

The Fourier Transform

For an absolutely integrable signal $x$, the Fourier transform of $x$ is defined for a frequency $f$ as
$\hat{x}(f) = \int_{-\infty}^{\infty} x(t)\,e^{-2\pi i f t}\,dt$

Properties of The Fourier Transform

  • Linearity: $\widehat{a x + b y} = a\hat{x} + b\hat{y}$
  • Frequency shift: $\widehat{e^{2\pi i f_0 t}\,x(t)}(f) = \hat{x}(f - f_0)$
  • Time shift: $\widehat{x(t - t_0)}(f) = e^{-2\pi i f t_0}\,\hat{x}(f)$
  • Time scaling: $\widehat{x(at)}(f) = \frac{1}{|a|}\,\hat{x}\!\left(\frac{f}{a}\right)$
  • Transform of an even function: real and even (a cosine transform)
  • Transform of an odd function: purely imaginary and odd (a sine transform)
  • The Fourier transform of an integrable signal is continuous.
  • Fourier transform of the derivative: $\widehat{x'}(f) = 2\pi i f\,\hat{x}(f)$
  • Derivative of the Fourier transform: $\hat{x}'(f) = \widehat{(-2\pi i t)\,x(t)}(f)$ (see the numeric check after this list)
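A numeric sketch (plain Python, under the $e^{-2\pi i f t}$ convention reconstructed above; the helper name ft and the Riemann-sum integration are illustrative) that the Gaussian $x(t) = e^{-\pi t^2}$ is its own Fourier transform, a standard pair:

    import math, cmath

    def ft(x, f, T=20.0, N=40_000):
        dt = 2 * T / N
        return sum(x(-T + k * dt) * cmath.exp(-2j * math.pi * f * (-T + k * dt))
                   for k in range(N)) * dt

    x = lambda t: math.exp(-math.pi * t * t)
    for f in (0.0, 0.5, 1.0):
        print(ft(x, f).real, math.exp(-math.pi * f * f))   # the two columns match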

Rapidly Decreasing Functions and the Fourier Transform

A signal $x$ is rapidly decreasing if it is infinitely differentiable and
$\lim_{|t| \to \infty} |t|^n\,x^{(k)}(t) = 0 \quad \text{for all } n, k \ge 0$
Moreover, in this case, the Fourier transform $\hat{x}$ is also rapidly decreasing.

The Inverse Fourier Transform

The inverse Fourier transform is defined
$\check{y}(t) = \int_{-\infty}^{\infty} y(f)\,e^{2\pi i f t}\,df$

Inversion for Rapidly Decreasing Functions

For a rapidly decreasing signal $x$ with Fourier transform $\hat{x}$, the inverse Fourier transform recovers the signal,
$x(t) = \int_{-\infty}^{\infty} \hat{x}(f)\,e^{2\pi i f t}\,df$
with $\check{\hat{x}} = x$

Tempered Distributions

A functional $T : \mathcal{S} \to \mathbb{C}$ on the space of rapidly decreasing functions is a tempered distribution if it is linear and continuous. Moreover, the restriction of a tempered distribution to $\mathcal{D}$ is a distribution in $\mathcal{D}'$, and if $T_1 \neq T_2$ then their restrictions to the same domain are also different in $\mathcal{D}'$
A distribution with compact support is tempered

Impulse Train (Dirac's Comb)

The impulse train, also known as Dirac's comb, is the distribution
$\mathrm{III} = \sum_{n=-\infty}^{\infty} \delta_n, \qquad \mathrm{III}(\varphi) = \sum_{n=-\infty}^{\infty} \varphi(n)$
and is a tempered distribution.

Fourier Transforms of Tempered Distributions

The Fourier transform of a tempered distribution $T$ is the distribution $\hat{T}$ defined by
$\hat{T}(\varphi) = T(\hat{\varphi})$
Moreover, the Fourier transform of a tempered distribution is also a tempered distribution

The Fourier transform of the distribution associated to an absolutely integrable signal is the distribution associated to the Fourier transform of the signal: $\widehat{T_x} = T_{\hat{x}}$

Properties of Fourier Transforms of Tempered Distributions

  • If $T_n \to T$ in $\mathcal{S}'$ then $\hat{T_n} \to \hat{T}$ in $\mathcal{S}'$
  • If $T$ is a tempered distribution then $\widehat{T'} = 2\pi i f\,\hat{T}$
  • $\hat{\delta} = 1$ and $\hat{1} = \delta$
    For the Dirac comb: $\hat{\mathrm{III}} = \mathrm{III}$

The Inverse Fourier Transform

The inverse Fourier transform of a tempered distribution $T$ is defined
$\check{T}(\varphi) = T(\check{\varphi})$
The Fourier transform then is invertible, and we find that $\check{\hat{T}} = T$
Important: if $x$ is a continuous integrable signal with its Fourier transform $\hat{x}$ continuous and integrable, then $x(t) = \int_{-\infty}^{\infty} \hat{x}(f)\,e^{2\pi i f t}\,df$ for every $t$

Fourier Transform of a Convolution

The Fourier transform of a convolution is the product of the individual transforms:
$\widehat{x * y} = \hat{x}\,\hat{y}$

Even and Odd Distributions

A distribution $T$ is:

  • Even: $T(\varphi(-t)) = T(\varphi(t))$
  • Odd: $T(\varphi(-t)) = -T(\varphi(t))$
    The Fourier transform preserves parity: the transform of an even distribution is even and the transform of an odd distribution is odd.

Poisson Summation Formula

Let $x \in \mathcal{S}$ have Fourier transform $\hat{x}$, then
$\sum_{n=-\infty}^{\infty} x(n) = \sum_{k=-\infty}^{\infty} \hat{x}(k)$

Fourier Transform of the Principal Value of 1/t

$\widehat{\mathrm{p.v.}\tfrac{1}{t}}(f) = -i\pi\,\mathrm{sgn}(f)$

Laplace Transform

For a locally integrable signal $x$, let $x(t) = 0$ for $t < 0$ (a causal signal); then the Laplace transform of $x$ is the function
$X(s) = \int_0^{\infty} x(t)\,e^{-st}\,dt, \qquad s \in \mathbb{C}$

Abscissa and Set of Absolute Convergence

The set of absolute convergence of a locally integrable signal is the set of values of $s$ in the Laplace domain for which the defining integral converges absolutely: $\int_0^{\infty} |x(t)|\,e^{-\mathrm{Re}(s)\,t}\,dt < \infty$
It follows then that a signal is Laplace-transformable if its set of convergence is non-empty, i.e. $\sigma_a < \infty$, where $\sigma_a$ represents the abscissa of (absolute) convergence of the signal $x$; the set of absolute convergence is then the half-plane $\mathrm{Re}(s) > \sigma_a$ (possibly including part of its boundary).

  • Linearity: $\mathcal{L}\{a x + b y\}(s) = a X(s) + b Y(s)$
  • Time differentiation: $\mathcal{L}\{x'\}(s) = s X(s) - x(0^-)$
  • Time integration: $\mathcal{L}\left\{\int_0^t x(\tau)\,d\tau\right\}(s) = \frac{X(s)}{s}$
  • Time delay: $\mathcal{L}\{x(t - t_0)\,u(t - t_0)\}(s) = e^{-s t_0}\,X(s)$
  • Convolution: $\mathcal{L}\{x * y\}(s) = X(s)\,Y(s)$
  • Final value theorem: for a signal with all roots of the denominator polynomial of $sX(s)$ having strictly negative real part, $\lim_{t \to \infty} x(t) = \lim_{s \to 0} s\,X(s)$
  • Initial value theorem: $x(0^+) = \lim_{s \to \infty} s\,X(s)$ (see the sketch after this list)
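A sketch (sympy assumed available) verifying a transform pair and the final value theorem for $x(t) = 1 - e^{-2t}$, whose limit as $t \to \infty$ is 1:

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    x = 1 - sp.exp(-2 * t)
    X = sp.laplace_transform(x, t, s, noconds=True)
    print(X)                        # 1/s - 1/(s + 2) = 2/(s*(s + 2))
    print(sp.limit(s * X, s, 0))    # final value theorem: lim s*X(s) = 1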

Laplace Transform Pairs

Some standard pairs (all signals taken as zero for $t < 0$):

  • $\delta(t) \mapsto 1$
  • $u(t) \mapsto \frac{1}{s}$
  • $t^n\,u(t) \mapsto \frac{n!}{s^{n+1}}$
  • $e^{-at}\,u(t) \mapsto \frac{1}{s + a}$
  • $\cos(\omega t)\,u(t) \mapsto \frac{s}{s^2 + \omega^2}$
  • $\sin(\omega t)\,u(t) \mapsto \frac{\omega}{s^2 + \omega^2}$

The Riemann-Fourier Formula: The Inverse Laplace Transform

For a continuous, Laplace-transformable signal $x$ such that $X(\sigma + i\omega)$ is integrable in $\omega$ for some $\sigma > \sigma_a$, then
$x(t) = \frac{1}{2\pi i}\int_{\sigma - i\infty}^{\sigma + i\infty} X(s)\,e^{st}\,ds$
The inverse Laplace transform may be extended to distributions: let $X(s)$ be holomorphic for $\mathrm{Re}(s) > \sigma_a$ and suitably bounded there; then $X$ is the Laplace transform of a unique Laplace-transformable distribution.

The Heaviside Formula

For some Laplace transform expressed as a rational function $X(s) = \frac{N(s)}{D(s)}$, where $N$ and $D$ are polynomials with no common roots and $\deg N < \deg D$, with $p_1, \dots, p_n$ being the poles of $X$, the inverse Laplace transform is
$x(t) = \sum_{k=1}^{n} \mathrm{Res}\big(X(s)\,e^{st},\ p_k\big)$
If the poles are all simple, then
$x(t) = \sum_{k=1}^{n} \frac{N(p_k)}{D'(p_k)}\,e^{p_k t}$

Laplace Transform of Distributions

The space of rapidly decreasing functions is denoted $\mathcal{S}$
The space of tempered distributions is denoted $\mathcal{S}'$
The Laplace transform of a transformable distribution $T$ is
$\mathcal{L}\{T\}(s) = T(e^{-st})$

Laplace Transform of REGULAR Distributions

For $x$ Laplace-transformable, setting $T = T_x$, the regular distribution associated to $x$, we have
$\mathcal{L}\{T_x\}(s) = X(s) = \int_0^{\infty} x(t)\,e^{-st}\,dt$

Properties of Laplace Transforms of Distributions