Hi, antthrush-ga:
Let g(x) = x^2 + 2. This is a continuous function on the real
numbers; it has "even" symmetry, g(-x) = g(x), and it is strictly
increasing for nonnegative x.
We can show how to define/construct all possible solutions to:
f(f(x)) = g(x)
which share these characteristics. The construction is pretty easy,
but in a sense it uses mathematical induction.
First off, let's consider what f(0) might be. Since g(0) = 2, it's not
hard to convince oneself that for f to be an even function and
strictly increasing on the nonnegative real numbers, we will need f(0)
> 0. Let's call f(0) = a.
Note that f(a) = f(f(0)) = g(0) = 2, and since f is strictly increasing:
0 < a implies a = f(0) < f(a) = 2
In fact we have an increasing sequence of real numbers:
x_0 = 0, x_1 = a, x_2 = 2, ... , x_k = f(x_{k-1})
with the property (since f(f(x)) = g(x)) that:
x_{k+2} = g(x_k) = (x_k)^2 + 2
Clearly the repeated applications of g(x) guarantee that the x_k's
will increase without limit (tend to positive infinity), so that every
nonnegative real number belongs to exactly one interval [x_k,
x_{k+1}).
The good news is that we can define f(x) to be any strictly increasing
function at all that maps [0,a] onto [a,2], and then the rest of the
definition of f(x) will automatically follow, giving a solution to
f(f(x)) = g(x).
Since the definition of f(x) on [0,a] = [x_0,x_1] determines its
definition for all nonnegative real numbers (and thus, because of its
evenness, on all real numbers), you will hopefully not be shocked to
learn that this definition is "piecewise" in the following sense:
        /
        | (f^-1(x))^2 + 2      for x in [a,2] = [x_1,x_2]
f(x) = <
        | g^n(f(g^-n(x)))      for x in [x_{2n},x_{2n+2}], n >= 1
        \
A few words may clarify the definition.  Since f is chosen to be
continuous and strictly increasing in mapping [0,a] onto [a,2] =
[x_1,x_2], it has a continuous (and strictly increasing) inverse f^-1.
That first formula is then a matter of "one step backwards, two steps
forward" as we apply g to the result of f^-1: for x in [a,2], we
move backwards one step (of f) to [0,a], then ahead two steps of f
(i.e. one of g) into [2, a^2 + 2] = [x_2,x_3].
Then f:[x_{2k},x_{2k+2}] --> [x_{2k+1},x_{2k+3}] is defined by pulling
x back to [0,2] with k applications of g^-1, applying f there, and
advancing again with k applications of g.
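To make the construction concrete, here is a minimal Python sketch (my own illustration, not part of the original answer; the names make_f, f0, f0_inv are hypothetical) that builds a solution from a seed function on [0,a] and evaluates it anywhere on [0,+oo) by the pull-back/push-forward rule just described:

```python
import math

def g(x):
    return x * x + 2

def g_inv(y):
    # inverse of g restricted to [0, +oo); defined for y >= 2
    return math.sqrt(y - 2)

def make_f(a, f0, f0_inv):
    """Build f with f(f(x)) = g(x) on [0, +oo) from a seed
    f0: [0,a] -> [a,2], strictly increasing, f0(0) = a, f0(a) = 2."""
    def f(x):
        if x < a:              # seed interval [x_0, x_1) = [0, a)
            return f0(x)
        if x < 2:              # [x_1, x_2): one step back, two steps forward
            return g(f0_inv(x))
        return g(f(g_inv(x)))  # pull back with g^-1, recurse, push forward with g
    return f

# Example 1 seed below: a = 1, f0(x) = x + 1
f = make_f(1.0, lambda x: x + 1.0, lambda y: y - 1.0)

# Check the functional equation at a few sample points
for x in [0.0, 0.5, 1.3, 2.7, 4.0]:
    assert abs(f(f(x)) - g(x)) < 1e-9
```

The recursion terminates because each application of g^-1 moves x down two intervals in the sequence x_0, x_1, x_2, ..., eventually landing below x_2 = 2.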
The continuity of this patchwork f depends on its continuity on each
of the pieces [x_k,x_{k+1}], together with the consistency of f at the
boundaries between the pieces.  But f agrees at the boundary between
[0,a] and [a,2]: on the lower subinterval we have chosen f so that
f(a) = 2, and on the upper subinterval, taking f^-1(a) = 0 and then
g(0) = 2 gives the same answer.
The consistency of f at all other boundaries (and thus f's continuity)
is a corollary of this agreement.
We give a couple of examples to illustrate that f can be defined
flexibly on the first subinterval [0,a], but we simplify the notation
from here on by taking a = 1.  Any value of a between 0 and 2 is
feasible, just more effort to express.
Example 1.
----------
Let f:[0,1] --> [1,2] be defined by f(x) = x + 1.
Then f:[1,2] --> [2,3] is defined by f(x) = x^2 - 2x + 3,
f:[2,3] --> [3,6] is defined by f(x) = x + 2*SQRT(x - 2) + 1,
and f:[3,6] --> [6,11] by f(x) = x^2 - 4(x+1)*SQRT(x - 2) + 6x - 5.
Example 2.
----------
Let f:[0,1] --> [1,2] be defined by f(x) = x^2 + 1.
Then f:[1,2] --> [2,3] is defined by f(x) = x + 1,
f:[2,3] --> [3,6] is defined by f(x) = x^2 - 2x + 3,
and f:[3,6] --> [6,11] by f(x) = x + 2*SQRT(x - 2) + 1.
The general formulation simplifies in both cases in that the formulas
on any particular interval [x_k,x_{k+1}] can be expressed "by
radicals" because it is in fact a composition of the polynomial
functions f and g and the easily expressed inverse g^-1:
g^-1(x) = SQRT(x - 2)
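To verify the closed forms above (a check of my own, not part of the original answer), each piece composed with the next should reproduce g(x) = x^2 + 2 on the corresponding subinterval:

```python
import math

def g(x):
    return x * x + 2

# (lo, hi, formula) for each piece of f, in order of subinterval
example1 = [
    (0, 1, lambda x: x + 1),
    (1, 2, lambda x: x * x - 2 * x + 3),
    (2, 3, lambda x: x + 2 * math.sqrt(x - 2) + 1),
    (3, 6, lambda x: x * x - 4 * (x + 1) * math.sqrt(x - 2) + 6 * x - 5),
]
example2 = [
    (0, 1, lambda x: x * x + 1),
    (1, 2, lambda x: x + 1),
    (2, 3, lambda x: x * x - 2 * x + 3),
    (3, 6, lambda x: x + 2 * math.sqrt(x - 2) + 1),
]

for pieces in (example1, example2):
    # composing piece k with piece k+1 should give g on [lo, hi]
    for (lo, hi, fk), (_, _, fnext) in zip(pieces, pieces[1:]):
        for i in range(11):
            x = lo + (hi - lo) * i / 10
            assert abs(fnext(fk(x)) - g(x)) < 1e-9
```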
Final Remarks
=============
The two examples above share a strange affinity. What is it and why
does it happen?
Neither of the examples above gives differentiability of f at the
boundary "knots" x_k.  Why not, and what would the "smoothest"
possible solution be, characterized in terms of a definition of f on
[0,a]?
regards, mathtalk-ga

Clarification of Answer by mathtalk-ga on 16 Jul 2005 23:17 PDT
Antthrush-ga wrote:
> Hi Mathtalk,
> Your solution is a good start, but not terribly satisfying.
Sorry about that, but I appreciate your giving me the chance to extend
and clarify my remarks.
> It's not clear to me that the function f(x) has to be strictly
> increasing. for example, f(x) = 1/(x+1) is decreasing, but f(f(x))
> is increasing.
My goal was to characterize (using "easy" math) _all_ solutions which
share with g(x) = x^2 + 2 the properties:
- continuous on the real numbers
- even symmetry
- strictly increasing for nonnegative x
I therefore omitted any proof of this last property, though I'll be
happy to discuss its status with respect to the possibility of more
general solutions.
It is certainly possible to obtain more general solutions, for example
by removing the requirement of continuity or even that the function be
defined at x=0.  Since the latter prevents the functional equation
f(f(x)) = g(x) from even being defined at x = 0, in my opinion such
solutions are "inferior" to ones which are continuous on all real
numbers.
As far as f(x) being strictly increasing on the nonnegative real
numbers goes, let's start with the observation that g(x) is 1-1 on the
nonnegative real numbers, which follows from the existence of its
inverse g^-1 on [2,+oo).
Then f(x) must also be 1-1 on the nonnegative real numbers, since
f(a) = f(b) would imply g(a) = f(f(a)) = f(f(b)) = g(b), and hence a = b.
Consider what it means for a continuous function like f to be 1-1 on
an interval [a,b]. It means (by the Intermediate Value Theorem) that
f is strictly monotone, whether increasing or decreasing, on [a,b].
By extension this is so (f strictly monotone) on all nonnegative real
numbers.
Of course as your example f(x) = 1/(x+1) shows, the composition of a
strictly decreasing function with itself does produce a strictly
increasing result. However in this case we can prove that f(x) is
strictly increasing on the nonnegative real numbers, given the other
assumptions of continuity and evenness.
There are two cases to consider: either f(0) = a > 2 or a < 2.  (The
case a = 2 is impossible, since then f(2) = f(f(0)) = g(0) = 2, yet
g(2) = f(f(2)) = f(2) = 2 would contradict g(2) = 6.)
Since f(a) = g(0) = 2 and f(2) = g(a) = a^2 + 2 > 2, we have:
A) if a > 2, then f is strictly decreasing on [2,a] & thus on [0,+oo)
B) if a < 2, then f is strictly increasing on [a,2] & thus on [0,+oo)
But if A) held and f were strictly decreasing on [0,+oo), then f would
be bounded above on the nonnegative real numbers by f(0) = a.  Invoking
the evenness of f would then bound f above by a for all real numbers,
and hence g = f o f would be bounded above by a.  But g(x) = x^2 + 2 is
not bounded above, a contradiction.
While we're at it, we may as well complete the demonstration that:
f(0) = a > 0
Clearly a = 0 cannot be true, since that gives this contradiction:
0 = a = f(0) = f(a) = 2
Suppose, also for sake of contradiction, that a < 0. Then since:
f(2) = f(g(0)) = g(f(0)) = a^2 + 2 > 2
f changes sign on [0,2], and hence f(b) = 0 for some b in [0,2] by the
Intermediate Value Theorem.  But then g(b) = f(f(b)) = f(0) = a < 0,
which contradicts the fact that g(b) = b^2 + 2 >= 2.
> I think there should be an analytic solution. Perhaps [it] can be
> constructed your way. But perhaps not. It might be possible to
> construct an analytic function by taking some limit of your functions,
> and values of a. But I can't even construct a solution with a
> continuous first derivative.
Requiring the solution to be analytic is not likely to be enough to
guarantee uniqueness. For a discussion (using substantial
mathematical machinery, but with some really good diagrams, such as
unfortunately I cannot provide directly in this text-based medium) of
the issue of which solution of the functional equation is "best", see
this link to a PDF download of a research paper by Resch, Stenger, and
Waldvogel that appeared in 2000:
"Functional Equations Related to the Iteration of Functions"
(Aequationes Math. 60, 2000, 25-37)
http://www.sam.math.ethz.ch/~waldvoge/Papers/functequ.html
They develop a general theory, give an analytical solution (Taylor series)
for one specific problem, and discuss "the question of selecting
distinguished solutions from a continuum of possible solutions."
The computation of an analytic solution is best approached through a
change of domain and range. Note that if h(x) = 1/f(1/x), then h
satisfies a functional equation:
h(h(x)) = 1/f(1/h(x)) = 1/f(f(1/x)) = 1/(x^-2 + 2) = x^2/(2x^2 + 1)
Basically we've made a similarity transformation that makes the origin
a fixed point for h(x), where previously the equivalent fixed point
for f(x) would have to be "at infinity".
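A one-line sanity check of the final simplification (my own check; only the algebraic identity is being tested, not any particular f):

```python
def g(x):
    return x * x + 2

# 1/g(1/x) should equal x^2/(2x^2 + 1) for every nonzero x
for x in [0.3, 1.0, 2.5, -1.7]:
    assert abs(1.0 / g(1.0 / x) - x * x / (2 * x * x + 1)) < 1e-12
```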
In short one way to pick a "best" solution f(x) is to specify the one
with the most regular behavior at infinity.
Before pursuing further the topic of analytic solutions, let's
construct by the elementary method outlined above "a solution with a
continuous first derivative".
Leave f(0) = a as a free parameter, 0 < a < 2, and define f:[0,a] --> [a,2]:
f(x) = ((2 - a)/a^2) * x^2 + a
Then f:[a,2] --> [2,a^2 + 2] has this first-degree polynomial formula:
f(x) = (a^2)(x - a)/(2 - a) + 2
The first derivative of f(x) as we approach x = a from below is:
LIMIT (2(2 - a)x)/(a^2) = 2(2 - a)/a
x -> a-
while the first derivative of f(x) approaching x = a from above is constantly:
(a^2)/(2 - a)
Naturally we will choose free parameter a to make these derivatives agree:
2(2 - a)/a = (a^2)/(2 - a)
a^3 = 2(a - 2)^2
a^3 - 2a^2 + 8a - 8 = 0
and Cardano's formula gives an explicit value for the root of interest:
a = (2/3) + c - 20/(9c) where c = ( 4*SQRT(69)/9 + (44/27) )^(1/3)
and numerically a = 1.139680581996106...
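Both the cubic root and the matching of one-sided derivatives are easy to confirm numerically; the following Python sketch (the bracketing interval [1,2] is my choice) checks a bisection root against Cardano's expression and against the derivative condition:

```python
import math

def p(a):
    return a**3 - 2 * a**2 + 8 * a - 8

# Bisection on [1, 2]: p(1) = -1 < 0 and p(2) = 8 > 0
lo, hi = 1.0, 2.0
for _ in range(80):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
a = (lo + hi) / 2
assert abs(a - 1.139680581996106) < 1e-9

# One-sided derivatives of the two pieces agree at x = a
left = 2 * (2 - a) / a          # limit of 2(2 - a)x/a^2 as x -> a-
right = a**2 / (2 - a)          # constant slope of the piece on [a, 2]
assert abs(left - right) < 1e-9

# Cardano's closed form gives the same root
c = (4 * math.sqrt(69) / 9 + 44 / 27) ** (1 / 3)
assert abs(2 / 3 + c - 20 / (9 * c) - a) < 1e-9
```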
> If you have any other thoughts, about whether there should be a smooth
> solution, or even a way to solve this numerically, I would be happy to
> hear them.
Yes, there should be a smooth solution in the sense of a series
expansion "around infinity".
An expository paper on the general topic of "fractional iterates" of a
general quadratic is:
R.E. Rice, B. Schweizer & A. Sklar,
When is f(f(z)) = az^2 + bz + c for all complex z?
Amer. Math. Monthly 87 (1980), 252-263
They demonstrate that if (b - 1)^2 - 4ac <= 1, then for every integer
r > 1 there exist solutions such that f^r = g, i.e. chain-composing r
copies of f gives g.
Note that in your case a = 1, b = 0, and c = 2, and that the inequality:
(b - 1)^2 - 4ac = 1 - 8 = -7 <= 1
is certainly satisfied.
To clarify any doubts about whether the definition I've outlined can
produce explicit values for a solution, I propose that you pick:
- some value a between 0 and 2
- some strictly increasing function f:[0,a] --> [a,2]
- some argument x
Provided the function f and its inverse are reasonably explicit, I
will then evaluate the function f at your chosen argument x.
regards, mathtalk-ga
Clarification of Answer by mathtalk-ga on 28 Jul 2005 22:53 PDT
There are some loose threads I'd like to tie up in connection with
this Question, having to do with the existence of solutions to:
f(f(x)) = g(x)
where g(x) = x^2 + 2, or more generally some other quadratic.
The theme that I'd like to pick up is the importance of specifying
a domain for f (and hence for g) in posing such problems.
The paper by Rice, Schweizer, and Sklar which we referenced above
answers in its first theorem, for every quadratic g(x), the question
of existence for functions of a complex variable:
Thm. No f:C -> C satisfies the above equation for all x in C.
Their proof uses a simple combinatorial approach that assumes no
analyticity or even continuity of the proposed solutions. The
complete proof would deal with a few special cases, but we can
explain how the proof applies to g(x) = x^2 + 2 directly.
The fixed points of g(x) = x^2 + 2 are the two roots of this:
g(x) = x
x^2 - x + 2 = 0
x = (1 ± i SQRT(7))/2
We are interested in showing that g(x) has one and only one pair
of points a,b such that:
g(a) = b, g(b) = a, a not equal to b
These points are in fact the two roots of the quartic equation:
g(g(x)) = x
(x^2 + 2)^2 + 2 = x
that are _not_ fixed points of g, i.e. that are different from
(1 ± i SQRT(7))/2.
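Concretely, the quartic g(g(x)) - x factors as (x^2 - x + 2)(x^2 + x + 3), so the pair in question is (-1 ± i SQRT(11))/2; these values are my own derivation, not stated in the text, but they are quickly verified with complex arithmetic:

```python
import cmath

def g(z):
    return z * z + 2

sqrt11 = cmath.sqrt(11)
a = (-1 + 1j * sqrt11) / 2   # roots of z^2 + z + 3 = 0
b = (-1 - 1j * sqrt11) / 2

# Both are fixed points of g(g(z)) ...
assert abs(g(g(a)) - a) < 1e-12 and abs(g(g(b)) - b) < 1e-12
# ... but not of g itself; instead g transposes them
assert abs(g(a) - a) > 1 and abs(g(b) - b) > 1
assert abs(g(a) - b) < 1e-12 and abs(g(b) - a) < 1e-12
```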
We may refer to the set {a,b} as a two-cycle (or orbit of length
2) as g transposes them. Next we consider what values f takes on
this set.
Since f(a) = f(g(b)) and f(b) = f(g(a)), it follows from the
commutativity of f and g that:
g(g(f(a))) = f(g(g(a))) = f(a)
so f(a) is a root of g(g(x)) = x and likewise so is f(b). But
g(a) = b, g(b) = a implies neither f(a) nor f(b) can be fixed
points of g, since if g(f(a)) = f(a), then applying f to both sides
(and using the commutativity of f and g):
f(g(f(a))) = f(f(a))
g(g(a)) = g(a)
a = b (since g(g(a)) = a and g(a) = b: contradiction)
Since f(a), and similarly f(b), is not a fixed point of g, {f(a),f(b)}
must be contained in {a,b}.  We could not have f(a) = a, since this
would imply g(a) = f(f(a)) = f(a) = a, so instead it must be that
f(a) = b.  By the same reasoning f(b) = a, and hence g(a) = f(f(a)) =
f(b) = a.  But by construction a is known not to be a fixed point of
g.  Contradiction.
Thus there is no functional square root of g(x) = x^2 + 2 defined on the
(entire) complex domain, so it makes sense to look more carefully at
solutions on the real domain.
We quoted earlier a theorem which Rice, Schweizer, and Sklar state
(without proof) at the end of their cited paper: if g(x) = ax^2 + bx + c
has real coefficients and d = (b - 1)^2 - 4ac, then there is a
functional square root over the real domain if and only if d <= 1.
The negative result for d > 1 can be established by the same
logic that guided our treatment of the complex domain. We will
illustrate the argument for the apparently "nice" case:
g(x) = x^2 - 2
presented earlier with a "formal solution":
f(2 cos(t/2)) = 2 cos(SQRT(2)*t/2)
A bit of algebra shows that the two fixed points of g(x):
g(x) = x
are x = 2,-1. Furthermore the fixed points of g(g(x)) that are
not fixed points of g(x) are real roots of this quadratic:
x^2 + x - 1 = 0
x = (-1 ± SQRT(5))/2
As argued before (or as one can directly calculate), this pair
of values x = a,b has the property that g(a)=b and g(b)=a. They
thus form a two-cycle under the action of g, and it follows that
no functional square root of g can be satisfactorily defined on
{a,b}.
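A quick numeric confirmation (my own) that this pair really is a two-cycle of g(x) = x^2 - 2:

```python
import math

def g(x):
    return x * x - 2

a = (-1 + math.sqrt(5)) / 2
b = (-1 - math.sqrt(5)) / 2

# a, b are the roots of x^2 + x - 1 = 0 ...
assert abs(a * a + a - 1) < 1e-12 and abs(b * b + b - 1) < 1e-12
# ... and g swaps them, so neither is a fixed point of g
assert abs(g(a) - b) < 1e-12 and abs(g(b) - a) < 1e-12
```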
But what is wrong with the definition of "formal solution" f as
presented above?  The first point to make is that f is not well-defined;
that is, the equation above that supposedly defines f does not assign
unique values for f(x) given a definite value of x.  In brief the
problem is that cos(t/2) has period 4*pi, but unless k is an integer,
cos(k * t/2) will not have period 4*pi.  The "result" 2 cos(SQRT(2)*t/2)
will therefore depend on which "representative" t is chosen to make
x = 2 cos(t/2).
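The ambiguity is easy to exhibit numerically (a check of my own): t and t + 4*pi represent the same x = 2 cos(t/2), yet give different candidate values for f(x):

```python
import math

t = 1.0
x1 = 2 * math.cos(t / 2)
x2 = 2 * math.cos((t + 4 * math.pi) / 2)
assert abs(x1 - x2) < 1e-9          # same argument x ...

y1 = 2 * math.cos(math.sqrt(2) * t / 2)
y2 = 2 * math.cos(math.sqrt(2) * (t + 4 * math.pi) / 2)
assert abs(y1 - y2) > 0.1           # ... but two different "values" f(x)
```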
Efforts to repair the definition are doomed to failure on the
interval [-2,2], which is where the cosine function would seem
to apply above, essentially because this "region" of the real
numbers contains the two-cycle derived above.
On the other hand using the hyperbolic cosine to assign values
for x >= 2 and/or x <= -2 does work (because 2 cosh(t/2) is not
periodic on the reals, or rather has a period of imaginary
modulus).
The more explicit construction given by David G. Cantor that we
cited earlier is also limited to values outside [-2,2] since his
formulation uses terms like SQRT(x^2 - 4).
This note, echoing the importance of the domain in posing a question
about the existence of f(x) s.t. f(f(x)) = g(x), is a good place to
stop.
regards, mathtalk-ga