Q: Math question (Answered, 4 out of 5 stars, 13 Comments)
Question  
Subject: Math question
Category: Science > Math
Asked by: antthrush-ga
List Price: $2.00
Posted: 12 Jul 2005 22:31 PDT
Expires: 11 Aug 2005 22:31 PDT
Question ID: 542904
Someone asked me the following question. The person who told it to him
said there was an easy answer, but neither of us can figure it out!
The question is
What function satisfies
f(f(x)) = x^2 + 2
Answer  
Subject: Re: Math question
Answered By: mathtalk-ga on 15 Jul 2005 21:54 PDT
Rated: 4 out of 5 stars
 
Hi, antthrush-ga:

Let g(x) = x^2 + 2.  This is a continuous function on the real
numbers, it has "even" symmetry, g(-x) = g(x), and it is strictly
increasing for nonnegative x.

We can show how to define/construct all possible solutions to:

  f(f(x)) = g(x)

which share these characteristics.  The construction is pretty easy,
but in a sense it uses mathematical induction.

First off let's consider what f(0) might be.  Since g(0) = 2, it's not
hard to convince oneself that for f to be an even function and
strictly increasing on the nonnegative real numbers, we will need f(0)
> 0.  Let's call f(0) = a.

Note that f(a) = f(f(0)) = g(0) = 2, and since f is strictly increasing:

  0 < a  implies  a = f(0) < f(a) = 2

In fact we have an increasing sequence of real numbers:

  x_0 = 0, x_1 = a, x_2 = 2, ... , x_k = f(x_{k-1})

with the property (since f(f(x)) = g(x)) that:

  x_{k+2} = g(x_k) = (x_k)^2 + 2

Clearly the repeated applications of g(x) guarantee that the x_k's
will increase without limit (tend to positive infinity), so that every
nonnegative real number belongs to exactly one interval [x_k,
x_{k+1}).
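For instance, taking a = 1 just to fix ideas (the value of a is still a
free choice at this point), the first few knots can be generated by a
short Python sketch:

  # Knots x_0 = 0, x_1 = a, and x_{k+2} = g(x_k) for g(x) = x^2 + 2.
  # The choice a = 1 is only for illustration; any 0 < a < 2 works.
  a = 1.0
  g = lambda t: t * t + 2
  xs = [0.0, a]
  for _ in range(4):
      xs.append(g(xs[-2]))
  print(xs)   # [0.0, 1.0, 2.0, 3.0, 6.0, 11.0]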

The good news is that we can define f(x) to be any strictly increasing
function at all that maps [0,a] onto [a,2], and then the rest of the
definition of f(x) will automatically follow, giving a solution to
f(f(x)) = g(x).

Since the definition of f(x) on [0,a] = [x_0,x_1] determines its
definition for all nonnegative real numbers (and thus, because of its
evenness, on all real numbers), you will hopefully not be shocked to
learn that this definition is "piecemeal" in the sense of the
following:

            /
            |    (f^-1(x))^2 + 2     for x in [a,2] = [x_1,x_2]
  f(x) =   <
            |    g^n(f(g^-n(x)))     for x in [x_{2n},x_{2n+2}]
            \

A few words may clarify the definition.  Since f is chosen to be
continuous and strictly increasing in mapping [0,a] to [a,2] =
[x_1,x_2], it has a continuous (and monotone increasing) inverse f^-1.
That first formula is then a matter of "one step backwards, two steps
forward": we apply g to the result of f^-1, so for x in [a,2], we
move backwards once to [0,a], then ahead two steps of f (one of g) into
[2, a^2 + 2] = [x_2,x_3].

Then f:[x_{2k},x_{2k+2}] --> [x_{2k+1},x_{2k+3}] is defined by pulling
x back to [0,2] with k applications of g^-1, applying f there and
advancing again with k applications of g.

The continuity of this patchwork f depends on its continuity on each
of the pieces [x_k,x_{k+1}], together with the consistency of f at the
boundaries between the pieces.  But f agrees at the boundary between
[0,a] and [a,2], because on the lower subinterval we have chosen
function f so that f(a) = 2, and on the upper subinterval, taking
f^-1(a) = 0 and then g(0) = 2 gives the same answer.

The consistency of f at all other boundaries (and thus f's continuity)
is a corollary of this agreement.
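
To make the patchwork concrete, here is a minimal Python sketch of the
construction (the names make_f and f0 are just illustrative); it builds
f from a chosen seed on [0,a] and spot-checks f(f(x)) = x^2 + 2:

  import math

  def make_f(a, f0, f0_inv):
      """Patchwork solution of f(f(x)) = x^2 + 2, given a strictly
      increasing seed f0: [0,a] -> [a,2] with inverse f0_inv."""
      g = lambda t: t * t + 2
      g_inv = lambda t: math.sqrt(t - 2)

      def f_base(t):                  # f on the seed interval [0,2]
          if t <= a:
              return f0(t)            # [0,a] -> [a,2]: the free choice
          return g(f0_inv(t))         # [a,2] -> [2,a^2+2]: forced by f(f) = g

      def f(x):
          x = abs(x)                  # even symmetry: f(-x) = f(x)
          n = 0
          while x > 2:                # pull back into [0,2] with g^-1 ...
              x, n = g_inv(x), n + 1
          y = f_base(x)
          for _ in range(n):          # ... then push forward again with g
              y = g(y)
          return y

      return f

  # Example 1 below: a = 1, f0(x) = x + 1 on [0,1].
  f = make_f(1.0, lambda t: t + 1.0, lambda t: t - 1.0)
  for x in (0.0, 0.5, 1.7, 3.0, -4.0, 10.0):
      print(x, f(f(x)), x * x + 2)    # the last two columns agree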

We give a couple of examples to illustrate that f can be defined
flexibly on the first subinterval [0,a], but we simplify the notation
from here on by taking a = 1.  Any value for a between 0 and 2 is
feasible, just more effort to express.

Example 1.
----------

Let f:[0,1] --> [1,2] be defined by f(x) = x + 1.

Then f:[1,2] --> [2,3] is defined by f(x) = x^2 - 2x + 3,

     f:[2,3] --> [3,6] is defined by f(x) = x + 2*SQRT(x - 2) + 1,

and  f:[3,6] --> [6,11] by f(x) = x^2 - 4(x+1)*SQRT(x - 2) + 6x - 5.

Example 2.
----------

Let f:[0,1] --> [1,2] be defined by f(x) = x^2 + 1.

Then f:[1,2] --> [2,3] is defined by f(x) = x + 1,

     f:[2,3] --> [3,6] is defined by f(x) = x^2 - 2x + 3,

and  f:[3,6] --> [6,11] by f(x) = x + 2*SQRT(x - 2) + 1.

The general formulation simplifies in both cases in that the formulas
on any particular interval [x_k,x_{k+1}] can be expressed "by
radicals", because each is in fact a composition of the polynomial
functions f and g and the easily expressed inverse g^-1:

  g^-1(x) = SQRT(x - 2)
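
As a quick check of Example 1 (a sympy sketch, purely illustrative),
composing the pieces does reproduce x^2 + 2 on the corresponding
intervals:

  import sympy as sp

  x = sp.symbols('x')
  f01 = x + 1                        # Example 1: f on [0,1]
  f12 = x**2 - 2*x + 3               # f on [1,2]
  f23 = x + 2*sp.sqrt(x - 2) + 1     # f on [2,3]

  # For x in [0,1]: f(f(x)) = f12(f01(x)) is identically x^2 + 2.
  print(sp.expand(f12.subs(x, f01)))                                # x**2 + 2

  # For x in [1,2]: f(f(x)) = f23(f12(x)); spot-check at rational points.
  for v in (1, sp.Rational(3, 2), 2):
      print(sp.simplify(f23.subs(x, f12).subs(x, v) - (v**2 + 2)))  # 0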

Final Remarks
=============

The two examples above share a strange affinity.  What is it and why
does it happen?

Neither of the examples above gives differentiability of f at the
boundary "knots" x_k.  Why "knot", and what would the "smoothest"
possible solution be, characterized in terms of the definition of f on
[0,a]?


regards, mathtalk-ga

Request for Answer Clarification by antthrush-ga on 16 Jul 2005 13:20 PDT
Hi Mathtalk,
Your solution is a good start, but not terribly satisfying. 
It's not clear to me that the function f(x) has to be strictly
increasing. For example, f(x) = 1/(x+1) is decreasing, but f(f(x)) is
increasing. I think there should be an analytic solution. Perhaps it
can be constructed your way. But perhaps not. It might be possible to
construct an analytic function by taking some limit of your functions,
and values of a. But I can't even construct a solution with a
continuous first derivative.
If you have any other thoughts, about whether there should be a smooth
solution, or even a way to solve this numerically, I would be happy to
hear them.
--Antthrush

Clarification of Answer by mathtalk-ga on 16 Jul 2005 23:17 PDT
Antthrush-ga wrote:

> Hi Mathtalk,
> Your solution is a good start, but not terribly satisfying. 

Sorry about that, but I appreciate your giving me the chance to extend
and clarify my remarks.

> It's not clear to me that the function f(x) has to be strictly
> increasing. for example, f(x) = 1/(x+1) is decreasing, but f(f(x))
> is increasing.

My goal was to characterize (using "easy" math) _all_ solutions which
share with g(x) = x^2 + 2 the properties:

  - continuous on the real numbers

  - even symmetry

  - strictly increasing for nonnegative x

I therefore omitted any proof of this last property, though I'll be
happy to discuss its status with respect to the possibility of more
general solutions.

It is certainly possible to obtain more general solutions, for example
by removing the requirement of continuity or even that the function be
defined at x=0.  Since the latter prevents the functional equation
f(f(x)) = g(x) from being even defined at x = 0, in my opinion such
solutions are "inferior" to ones which are continuous on all real
numbers.

As far as f(x) being strictly increasing on the nonnegative real
numbers goes, let's start with the observation that g(x) is 1-1 on the
nonnegative real numbers, which follows from the existence of its
inverse g^-1 on [2,+oo).

Then f(x) must also be 1-1 on the nonnegative real numbers, since f(a)
= f(b) would imply g(a) = g(b), and hence a = b.

Consider what it means for a continuous function like f to be 1-1 on
an interval [a,b].  It means (by the Intermediate Value Theorem) that
f is strictly monotone, whether increasing or decreasing, on [a,b]. 
By extension this is so (f strictly monotone) on all nonnegative real
numbers.

Of course as your example f(x) = 1/(x+1) shows, the composition of a
strictly decreasing function with itself does produce a strictly
increasing result.  However in this case we can prove that f(x) is
strictly increasing on the nonnegative real numbers, given the other
assumptions of continuity and evenness.

There are two cases to consider: either f(0) = a > 2 or a < 2.  (The
case a = 2 is impossible, since it would give f(2) = f(f(0)) = g(0) = 2
and then g(2) = f(f(2)) = f(2) = 2, contradicting g(2) = 6.)

Since f(a) = g(0) = 2 and f(2) = g(a) = a^2 + 2 > 2, we have:

A) if a > 2, then f is strictly decreasing on [2,a] & thus on [0,+oo)

B) if a < 2, then f is strictly increasing on [a,2] & thus on [0,+oo)

But if A) held and f were strictly decreasing on [0,+oo), then f would
be bounded above on the nonnegative real numbers by f(0) = a.  Invoking
the evenness of f would bound f above by a for all real numbers.
Hence g would be bounded above by a.  But since g(x) = x^2 + 2 is not
bounded above, that would be a contradiction.

While we're at it, we may as well complete the demonstration that:

    f(0) = a > 0

Clearly a = 0 cannot be true, since that gives this contradiction:

    0 = a = f(0) = f(a) = 2

Suppose, also for sake of contradiction, that a < 0.  Then since:

    f(2) = f(g(0)) = g(f(0)) = a^2 + 2 > 2

f changes sign on [0,2], and hence has a root f(b) = 0 for some b in
[0,2] by the Intermediate Value Theorem.  But then g(b) = f(f(b)) =
f(0) = a < 0, which contradicts the fact that g(b) = b^2 + 2 is not
less than 2.

> I think there should be an analytic solution. Perhaps [it] can be
> constructed your way. But perhaps not. It might be possible to
> construct an analytic function by taking some limit of your functions,
> and values of a. But I can't even construct a solution with a
> continuous first derivative.

Requiring the solution to be analytic is not likely to be enough to
guarantee uniqueness.  For a discussion (using substantial
mathematical machinery, but with some really good diagrams, which
unfortunately I cannot reproduce directly in this text-based medium) of
the issue of which solution of the functional equation is "best", see
this link to a PDF download of a research paper by Resch, Stenger, and
Waldvogel that appeared in 2000:

[ Functional Equations related to the Iteration of Functions]
(Aequationes Math. 60, 2000, 25-37)
http://www.sam.math.ethz.ch/~waldvoge/Papers/functequ.html

They develop a general theory, give an analytical solution (Taylor series)
for one specific problem, and discuss "the question of selecting
distinguished solutions from a continuum of possible solutions."

The computation of an analytic solution is best approached through a
change of domain and range.  Note that if h(x) = 1/f(1/x), then h
satisfies a functional equation:
                                                           x^2
   h(h(x)) = 1/f(1/h(x)) = 1/f(f(1/x)) = 1/(x^-2 + 2) = --------
                                                        2x^2 + 1

Basically we've made a similarity transformation that centers
attention at the origin being a fixed point for h(x), where previously
the equivalent fixed point for f(x) would have to be "at infinity". 
In short one way to pick a "best" solution f(x) is to specify the one
with the most regular behavior at infinity.
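
A short sympy check (again just illustrative) confirms the conjugated
map and its fixed point at the origin:

  import sympy as sp

  x = sp.symbols('x', positive=True)
  g = x**2 + 2
  h_target = sp.simplify(1 / g.subs(x, 1/x))   # conjugate g by the inversion x -> 1/x
  print(h_target)                              # x**2/(2*x**2 + 1)
  print(sp.limit(h_target, x, 0))              # 0: the origin is now a fixed point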

Before pursuing further the topic of analytic solutions, let's
construct by the elementary method outlined above "a solution with a
continuous first derivative".

Leave f(0) = a as a free parameter, 0 < a < 2, and define f:[0,a] --> [a,2]:

    f(x) = ((2 - a)/a^2) * x^2 + a

Then f:[a,2] --> [2,a^2 + 2] has this first-degree polynomial formula:

    f(x) = (a^2)(x - a)/(2 - a)  +  2

The first derivative of f(x) as we approach x = a from below is:

    LIMIT  (2(2 - a)x)/(a^2) = 2(2 - a)/a
   x -> a-

while the first derivative of f(x) approaching x = a from above is constantly:

     (a^2)/(2 - a)

Naturally we will choose free parameter a to make these derivatives agree:

     2(2 - a)/a = (a^2)/(2 - a)

     a^3 = 2(a - 2)^2

     a^3 - 2a^2 + 8a - 8 = 0

and Cardano's formula gives an explicit value for the root of interest:

a = (2/3) + c - 20/(9c) where c = ( 4*SQRT(69)/9 + (44/27) )^(1/3)

and numerically a = 1.139680581996106...
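
A quick numerical check of this value and of the derivative matching
(a Python sketch, using numpy only for the cubic's roots):

  import numpy as np

  # Real root of a^3 - 2a^2 + 8a - 8 = 0 (the cubic has exactly one real root).
  roots = np.roots([1.0, -2.0, 8.0, -8.0])
  a = float(min(roots, key=lambda r: abs(r.imag)).real)
  print(a)                                    # 1.139680581996...

  # Cardano's closed form for the same root.
  c = (4 * np.sqrt(69) / 9 + 44 / 27) ** (1 / 3)
  print(2 / 3 + c - 20 / (9 * c))             # same value

  # The one-sided derivatives at x = a now agree.
  print(2 * (2 - a) / a, a**2 / (2 - a))      # both ~1.5097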

> If you have any other thoughts, about whether there should be a smooth
> solution, or even a way to solve this numerically, I would be happy to
> hear them.

Yes, there should be a smooth solution in the sense of a series
expansion "around infinity".

An expository paper on the general topic of "fractional iterates" of a
general quadratic is:

R.E. Rice, B. Schweizer & A. Sklar,
   When is f(f(z)) = az^2 + bz + c for all complex z?
   Amer. Math. Monthly 87 (1980), 252-263

They demonstrate that if (b - 1)^2 - 4ac <= 1, then for every integer r
> 1 there exist solutions such that f^r = g, i.e., chain-composing r
copies of f gives g.

Note that in your case a = 1, b = 0, and c = 2, and that the inequality:

  (b - 1)^2 - 4ac = 1 - 8 = -7 <= 1

is certainly satisfied.

To clarify any doubts about whether the definition I've outlined can
produce explicit values for a solution, I propose that you pick:

  - some value a between 0 and 2

  - some strictly increasing function f:[0,a] --> [a,2]

  - some argument x

Provided the function f and its inverse are reasonably explicit, I
will then evaluate the function f at your chosen argument x.


regards, mathtalk-ga

Clarification of Answer by mathtalk-ga on 17 Jul 2005 23:16 PDT
I'd like to comment on some of the interesting Comments posted to this thread.

reinedd-ga made a suggestion that the problem statement is perhaps
flawed, and should be f(f(x)) = x^(2+2) = x^4, which as hfshaw-ga
points out would have a simple solution f(x) = x^2.

Curiously enough there is also a fairly compact solution for the modified problem:

  f(f(x)) = x^2 - 2

This polynomial g(x) = x^2 - 2 is related to the Chebyshev polynomial
P_2 by a simple "normalization".  One often finds the definition:

  P_n(cos(t)) = cos(n*t)

for this family of orthogonal polynomials.  If instead one defines:

  F_n(2*cos(t/2)) =  2*cos(n*t/2)

then F_2(x) = 2*(2*(x/2)^2 - 1) = x^2 - 2 = g(x).

But the family of Chebyshev polynomials has the property:

  P_m(P_n(x)) = P_mn(x)

which is "similarly" shared by the family F_n.  So one can formally
assert that a solution to f(f(x)) = x^2 - 2 is given by:
  F_√2(x), where F_√2(2*cos(t/2)) = 2*cos(√2 * t/2)

In other words:

  F_√2(x) = 2*cos(√2 * arccos(x/2))

For the solution of this modified problem I am indebted to David G.
Cantor's March 1985 post (to an ARPAnet list?), available in Google
Groups in a slightly redacted form here:

[PROLOG Digest V3 #11]
http://groups-beta.google.com/group/net.lang.prolog/msg/746e409319986d69

"The solution is essentially contained in the article by Michael
Restivo in the March 20 issue of Prolog Digest."

One is tempted to conceive that the original problem statement might
have been in this easier form.  Even so I would like to present a full
solution to the problem as it has been assigned to us by antthrush-ga.

regards, mathtalk-ga

Request for Answer Clarification by antthrush-ga on 18 Jul 2005 11:31 PDT
Now you're talking. This solution is much better. In fact, I think it
gives the solution to my original problem.

If we take
f(x) = a Cos[ sqrt(2) ArcCos( x/a ) ]
then 
f(f(x)) = a Cos[ sqrt(2) ArcCos( cos( sqrt(2) ArcCos(x/a) ) ) ]
= a Cos[2 ArcCos(x/a) ] = 2 x^2/a - a

thus we take a = -2 and x-> I x. so the solution is

f(x) = -2 Cos [ sqrt(2) ArcCos( - i x/a) ]

I don't even think there's an ambiguity about the definition of ArcCos here, since 
arccos( i x) = pi/2 + i arcsinh(x), 
and arcsinh is well defined.

thanks for your help!

Clarification of Answer by mathtalk-ga on 18 Jul 2005 21:01 PDT
Hi, antthrush-ga:

The solution of f(f(x)) = g(x) = x^2 - 2 does allow equally the solution of:

  -g(-x) = -(x^2) + 2

but I do not think we can change x |--> ix afterwards within the
context of the problem.

Notice that the formal definition:

  f(2 cos(t/2)) = 2 cos(√2 * t/2)

only provides values for f(x) on [-2,2].  However the hyperbolic
cosine satisfies the same "double angle" identity as cosine:

  cosh(2t) = 2 cosh^2(t) - 1

and so the definition may be extended outside of [-2,2] by simply
replacing the circular function by its hyperbolic counterpart.

To apply this technique to solving g(x) = x^2 + 2 would require that
we have a function satisfying (say):

   h(2t) = 2 (h(t))^2 + 1

Unfortunately the closest that elementary transcendental functions can
bring us to such an identity is probably this one that mixes two
different functions:

   cosh(2t) =  2 sinh^2(t) + 1

Moreover, if we take t = 0 in the functional equation above for h, we get:

   2(h(0))^2 - h(0) + 1 = 0
whose only solutions are complex:  h(0) = (1 ± i√7)/4.

This, together with your impulse to substitute x |--> ix, points to
expanding the search for solutions from real valued to complex valued
functions.


regards, mathtalk-ga

Clarification of Answer by mathtalk-ga on 28 Jul 2005 22:53 PDT
There are some loose threads I'd like to tie up in connection with
this Question, having to do with the existence of solutions to:

  f(f(x)) = g(x)
  
where g(x) = x^2 + 2, or more generally some other quadratic.

The theme that I'd like to pick up is the importance of specifying
a domain for f (and hence for g) in posing such problems.

The paper by Rice, Schweizer, and Sklar which we referenced above
has as its first theorem the answer for every quadratic g(x) to
existence of functions of a complex variable:

Thm.  No f:C -> C satisfies the above equation for all x in C.

Their proof uses a simple combinatorial approach that assumes no
analyticity or even continuity of the proposed solutions.  The
complete proof would deal with a few special cases, but we can
explain how the proof applies to g(x) = x^2 + 2 directly.

The fixed points of g(x) = x^2 + 2 are the two roots of this:

  g(x) = x
  
  x^2 - x + 2 = 0
  
  x = (1 ± i SQRT(7))/2
  
We are interested in showing that g(x) has one and only one pair
of points a,b such that:

  g(a) = b, g(b) = a, a not equal to b

These points are in fact the two roots of the quartic equation:

  g(g(x)) = x
  
  (x^2 + 2)^2 + 2 = x
  
that are _not_ fixed points of g, i.e., that are different from
(1 ± i SQRT(7))/2.
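
A short sympy computation (illustrative) exhibits this two-cycle
explicitly:

  import sympy as sp

  x = sp.symbols('x')
  g = x**2 + 2
  print(sp.solve(g - x, x))                 # fixed points: 1/2 +/- sqrt(7)*I/2
  quartic = sp.expand(g.subs(x, g) - x)     # x**4 + 4*x**2 - x + 6
  reduced = sp.cancel(quartic / (g - x))    # x**2 + x + 3: the genuine 2-cycle factor
  a, b = sp.solve(reduced, x)               # -1/2 +/- sqrt(11)*I/2
  print(sp.simplify(g.subs(x, a) - b), sp.simplify(g.subs(x, b) - a))   # 0 0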

We may refer to the set {a,b} as a two-cycle (or orbit of length
2) as g transposes them.  Next we consider what values f takes on
this set.

Since f(a) = f(g(b)) and f(b) = f(g(a)), it follows from the
commutativity of f and g that:

  g(g(f(a))) = f(g(g(a))) = f(a)

so f(a) is a root of g(g(x)) = x and likewise so is f(b).  But
g(a) = b, g(b) = a implies neither f(a) nor f(b) can be fixed
points of g, since if g(f(a)) = f(a), then applying f to both
sides:

  f(g(f(a))) = f(f(a))

    g(g(a))  =  g(a)

        a    =    b     (contradiction)

Since f(a), and similarly f(b), is not a fixed point of g, {f(a),f(b)}
is by implication the set {a,b}.  We could not have f(a) = a, since
this would imply g(a) = a, so instead it must be that f(a) = b.  But
this implies f(b) = a, and hence g(a) = f(f(a)) = a.  By construction
a is known not to be a fixed point of g.  Contradiction.

Thus there is no functional square root of g(x) = x^2 + 2 defined on
the (entire) complex domain, so it makes sense to look more carefully
at solutions on the real domain.

We quoted earlier from a theorem which Rice, Schweizer, and Sklar
give (without proof) at the end of their cited paper.  They give
a result that establishes that if g(x) = ax^2 + bx + c has real
coefficients and d = (b - 1)^2 - 4ac, then there is a functional
square root over the real domain if and only if d <= 1.

The negative result for d > 1 can be established by the same
logic that guided our treatment of the complex domain.  We will
illustrate the argument for the apparently "nice" case:

  g(x) = x^2 - 2

presented earlier with a "formal solution":

  f(2 cos(t/2)) = 2 cos(SQRT(2)*t/2)

A bit of algebra shows that the two fixed points of g(x):

  g(x) = x

are x = 2,-1.  Furthermore the fixed points of g(g(x)) that are
not fixed points of g(x) are real roots of this quadratic:

  x^2 + x - 1 = 0

                 -1 ± SQRT(5)
            x = --------------
                      2

As argued before (or as one can directly calculate), this pair
of values x = a,b has the property that g(a)=b and g(b)=a.  They
thus form a two-cycle under the action of g, and it follows that
no functional square root of g can be satisfactorily defined on
{a,b}.
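
The same computation as in the complex case (an illustrative sympy
sketch) confirms that g(x) = x^2 - 2 swaps this pair:

  import sympy as sp

  x = sp.symbols('x')
  g = x**2 - 2
  quartic = sp.expand(g.subs(x, g) - x)     # x**4 - 4*x**2 - x + 2
  reduced = sp.cancel(quartic / (g - x))    # x**2 + x - 1
  a, b = sp.solve(reduced, x)               # (-1 +/- sqrt(5))/2
  print(a, b)
  print(sp.simplify(g.subs(x, a) - b), sp.simplify(g.subs(x, b) - a))   # 0 0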

But what is wrong with the definition of "formal solution" f as
presented above?  The first point to make is that f is not well-
defined: the equation above that supposedly defines f does not
assign unique values for f(x) given a definite value of x.  In
brief the problem is that cos(t/2) has period 4pi, but unless k
is an integer, cos(k * t/2) will not have period 4pi (and here
k = SQRT(2) is not an integer).

The "result" 2 cos(SQRT(2)*t/2) will therefore depend on which
"representative" t is chosen to make x = 2 cos(t/2).

Efforts to repair the definition are doomed to failure on the
interval [-2,2], which is where the cosine function would seem
to apply above, essentially because this "region" of the real
numbers contains the two-cycle derived above.

On the other hand using the hyperbolic cosine to assign values
for x >= 2 and/or x <= -2 does work (because 2 cosh(t/2) is not
periodic on the reals, or rather has a purely imaginary period).
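
A small numerical illustration (Python, standard math module) of both
points: the hyperbolic formula does satisfy f(f(x)) = x^2 - 2 for
x >= 2, while the circular "formal solution" fails for x near -2:

  import math

  def f_hyp(x):
      # hyperbolic version, valid for x >= 2
      return 2 * math.cosh(math.sqrt(2) * math.acosh(x / 2))

  for x in (2.0, 2.5, 5.0, 10.0):
      print(x, f_hyp(f_hyp(x)), x * x - 2)        # last two columns agree

  def f_circ(x):
      # the formal circular version on [-2,2]
      return 2 * math.cos(math.sqrt(2) * math.acos(x / 2))

  print(f_circ(f_circ(1.9)), 1.9**2 - 2)          # agrees near x = 2
  print(f_circ(f_circ(-1.9)), (-1.9)**2 - 2)      # disagrees near x = -2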

The more explicit construction given by David G. Cantor that we
cited earlier is also limited to values outside [-2,2] since his
formulation uses terms like SQRT(x^2 - 4).

This echoing note on the importance of the domain in posing a
question about the existence of f(x) s.t. f(f(x)) = g(x) is a good
place to stop.


regards, mathtalk-ga
antthrush-ga rated this answer: 4 out of 5 stars
Great job researching and explaining the answer. But I still think
there's an analytic solution...

Comments  
Subject: Re: Math question
From: reinedd-ga on 13 Jul 2005 06:55 PDT
 
could the function be f(f(x)) = x^(2+2)
Subject: Re: Math question
From: antthrush-ga on 13 Jul 2005 11:22 PDT
 
Do you mean f(f(x))=x^4? 
The question is what is f(x), not what is f(f(x)).
Subject: Re: Math question
From: biophysicist-ga on 14 Jul 2005 05:22 PDT
 
I thought about it for a little while, and I don't see any obvious
answer.  I also don't know a general procedure for solving such
problems.  Could your friend have been mistaken when he said it was
easy?

f(x) can't be simply a polynomial in x.  If f were quadratic, f(f(x))
would involve x^4.  If f were linear, f(f(x)) would also be linear. 
So f would have to be something more complicated.

If you do find out the answer, please post it here so I'll know what it is!
Subject: Re: Math question
From: juanitin-ga on 14 Jul 2005 12:42 PDT
 
I have a rather straightforward solution but I don't know whether it
is the one you are interested in. We can define the following function
separately in two regions:
     
     f(x) = x + j*sqrt(2) if Im(x) = 0;  f(x) = x*(x^*) if Im(x) is not zero,
where x^* means complex conjugate, Im(x) means the imaginary part of x, and j*j = -1.

Thus, if x is a real variable, we have that f(x) = x + j*sqrt(2), and
f(f(x)) = x^2 + 2. Of course, I have assumed that f is a function of a
complex variable, while x is real.
If somebody has any information about how to attack these problems in
a more general form please let me know! I have found the problem
appealing.
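
For what it's worth, juanitin-ga's piecewise definition is easy to
check numerically (a small illustrative Python sketch; complex
arithmetic is built in):

  SQRT2 = 2 ** 0.5

  def f(z):
      z = complex(z)
      if z.imag == 0:
          return z + 1j * SQRT2          # real arguments are pushed off the real axis
      return (z * z.conjugate()).real    # x*(x^*) = |x|^2 for non-real arguments

  for x in (0.0, 1.0, -2.5, 3.0):
      print(x, f(f(x)), x * x + 2)       # the last two columns agree for real x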
Subject: Re: Math question
From: hfshaw-ga on 14 Jul 2005 13:39 PDT
 
I think what reinedd was asking is whether the original questioner
might have misread or misunderstood the problem.  Reinedd was asking
whether the problem involved finding f(x) when f(f(x)) = x^(2+2)
rather than (x^2) + 2.  The former has a straightforward solution
(i.e., f(x) = x^2).  The latter problem (despite what juanitin
wrote) is not what most would consider "straightforward"!
Subject: Re: Math question
From: antthrush-ga on 14 Jul 2005 14:35 PDT
 
juanitin's answer is cute, but somewhat unsatisfying. especially since
it doesn't work if x is not real. i don't know if the function is
supposed to be real or not, but i think there's probably a better
solution.
Subject: Re: Math question
From: juanitin-ga on 15 Jul 2005 07:29 PDT
 
I agree with you, the solution I posted is somewhat "ugly".
I have another solution, although probably there is a more
elegant one (I'm afraid this one is "ugly" too!). But this time x is
real and the function is too, i.e., f:R->R.
The function is the following:

f(x)=lim(t->0) (1+d(e^(t*x^2))/dt)

This is, f(x) is the limit when t->0 of 1 plus the derivative with
respect to t of e^(t*x^2)

The calculation gives:
f(x)=1+x^2
f(f(x))=2+x^2

If I have time I would try something easier like a fraction of two
polynomials, or any other more often used function, but in any case
the function given above is a solution.
Subject: Re: Math question
From: juanitin-ga on 15 Jul 2005 09:26 PDT
 
By the way, I agree with biophysicist. It doesn't seem that a polynomial will work.
PS: when I say I would try something easier, I mean that I will try to
find a function which can be expressed in a simpler and nicer form.
But to me it's more difficult to find such a solution than the ones I
have posted since from the first attempts I made with "easy" functions
I got nothing.
Thanks for the question, antthrush!
Subject: Re: Math question
From: antthrush-ga on 15 Jul 2005 09:46 PDT
 
I don't understand juanitin's solution. Isn't
f(x) = lim 1 + d/dt exp(t x^2) = 1+x^2
in which case f(f(x))= 2 + 2 x^2 +x^4?

A finite polynomial probably won't work. But an infinite series might.
We can write f(x) = sum a_n x^2n, then f(f(x)) is a series. matching
to x^2 + 2 gives a countable series of equations which can be solved,
in principle. then maybe the series can be summed giving a closed form
answer. but i haven't had any luck with this approach.
Subject: Re: Math question
From: juanitin-ga on 16 Jul 2005 15:01 PDT
 
Very good point, mathtalk!
But it seems that antthrush (who was completely right when he said
that my posted solution for real functions was wrong, I made a silly
miscalculation!) is looking for a smoother function. In any case I have
found your posted solution a bright answer, although it doesn't cover
all the cases, as antthrush pointed out. I will think more on that
(and I will be more careful!)
Subject: Re: Math question
From: mathtalk-ga on 20 Jul 2005 08:32 PDT
 
The word "analytic" may suggest something different to interested
readers than what it means in higher mathematics.

An analytic function is one with a derivative in the sense of a
function of a complex variable.  This turns out to be a much stronger
notion than having a derivative in the sense of a function of a real
variable.  Indeed being analytic implies having derivatives of all
orders and a Taylor series expansion in the neighborhood of any point
where the function is analytic.

Often the phrase "analytic function" needs to be qualified as to
where, in the complex plane, the function is analytic.  Functions that
are analytic everywhere in the complex plane are then called "entire"
functions.  Functions that are real-valued on the real numbers and
have convergent power series expansions in a neighborhood of each real
argument are called "real analytic".

This is the sense of my asserting that there is an analytic solution,
not necessarily a "neat, compactly defined" one, but a convergent power
series, and further that requiring the solution to be analytic will
not make the answer unique.

A nice collection of references to the literature, but unfortunately
few articles available on-line, is here:

[Iterative Roots and Fractional Iteration]
(references collected by Lars Kindermann)
http://reglos.de/lars/ffx.html


regards, mathtalk-ga
Subject: Re: Math question
From: dyscogitator-ga on 12 Sep 2005 07:47 PDT
 
This message thread was recently brought to my attention.  I believe
that in all likelihood I am the "person who told it to him," as
referenced in the original question.  Except that when I first posed
it, I thought that I specified "for all Real x."

  . . . smooth . . . analytic . . . continuous . . . 

Put all your maths away.  You're killing me!  Let's think very simply
about this, and, at the same time, let's generalize:

Let f(#n)(x) denote n iterations of the function f(x).  [e.g. f(#5)(x)
= f(f(f(f(f(x)))))].  Consider problems of the shape f(#n)(x) = g(x)
in which the domain of f(#n)(x) and the range of g(x) are both Real,
where f(x) is to be determined.

Please consider:

        /
        |  x+(n-1)i for REAL x
f(x) = <   g(x-i)   for REAL (x-i)
        |  x-i      for everything else
        \

Does this not satisfy?
Subject: Re: Math question
From: mathtalk-ga on 12 Sep 2005 09:23 PDT
 
dyscogitator-ga asked: "Does this not satisfy?"

The challenge of this problem lies in finding a way to satisfy the
criterion on the entire domain of f.

The domain of the nth iterate f(#n) of f, to use dyscogitator-ga's
notation, is in fact the same as the domain of f.  It would therefore
not be accurate to claim that "the domain of f(#n)(x) and the range of
g(x) are both Real."  There is no difficulty in finding any number of
solutions where:

  f(f(x)) = x^2 + 2

is satisfied on only a part of the domain of f.

There are piecewise polynomial real-valued functions f which are
continuous and satisfy the equation for all real numbers.
Continuity can certainly be improved to differentiability, and
possibly to real analyticity.

Notice that the approach of defining a complex-valued function on R by
adding an imaginary constant to shift the argument away from the
reals, to a line parallel in the complex plane, then defining the
"solution" by assigning values there, was offered by juanitin-ga on
July 14, 2005 (see above).

Such a solution bears no essential relationship to the prescribed
real-valued function x^2 + 2.  This technique can supply any desired
function as a "self-composition" on the reals as a subdomain of the
complex numbers, a fact dyscogitator-ga sees as "generalization" but
which the consensus of the Commenters and the Customer made out to be
"unsatisfying".  Note that antthrush-ga (July 14, 2005) wrote
"juanitin's answer is cute, but somewhat unsatisfying. especially
since it doesn't work if x is not real."

regards, mathtalk-ga
