Hi,
What we need to do here is take much of this out of the equation.
Right now you are trying to transfer data, then set it up (somehow) so
that the stores can read it without mixing it up with data that just
came in, and check stores as they transfer data back and forth, hoping
that somehow nothing gets mixed up. Possible? Yes, but why bother?
What you need in a situation like this is a centralized database,
which the stores and the warehouse talk to in real time, without the
need to use a VPN or some other such setup. Basically, that means a
database server. This used to be quite an expense, but these days it
is pretty easy to get and set up. It's just that we aren't used to
thinking of them in these terms.
What we will use here is MySQL, on an ISP that offers MySQL as a
database for a website. To connect to the MySQL database from your
stores and the warehouse, we will use the ODBC driver. What this does
for your Access database (and your Visual Basic programming) is create
linked tables in your Access database that look and feel, to the user
and the programmer, just like native Access tables. Only they aren't;
they live on the ISP's server.
The ODBC driver is very easy to get and install. It is located here:
http://www.mysql.com/products/connector/odbc/
And there is a reference manual for it here:
http://dev.mysql.com/doc/mysql/en/odbc.html
Once installed on each of the clients, you connect to the data tables
directly with Access, or with Visual Basic controlling Access. Using
Visual Basic you can set up your backup schedules, access reports, and
just about anything else you want to get done.
MySQL is a full database server, so real time reports are now possible
as well. Depending on how you set up the warehouse functions, for
instance, your warehouse can be "live" with the MySQL database, and
therefore show live inventory.
The fact that all the stores are using different ISPs is irrelevant,
just so long as they can connect to the Internet and to your web
hosting server. No FTP, no SOAP, none of that other stuff.
Web hosting that comes with MySQL as a feature is very inexpensive
these days, in fact I've seen some as low as $10.00 a month
(hummingbird for example).
MySQL itself is very easy to use, and the ISP will already have it
set up for you. Your programmer just needs to set up the tables and
the databases. There are some differences with the date and time fields,
and some other minor things, but nothing you can't work around or
adjust.
Have your programmer look over the MySQL information (also on that
website http://www.mysql.com ) and if there is anything he sees as a
stumbling block, use the Clarification Request here and I'll point him
in the direction of a solution. I've done this setup several times,
and really it's very simple.
You can do this with MS SQL Server, but the costs are normally (for a
small shop like you describe here) too much to be worth the trouble.
Thanks, and if you have any questions on this solution, just use the
clarification request function and I'll address them for you.
webadept-ga
|
Clarification of Answer by
webadept-ga
on
30 Mar 2005 06:22 PST
Hi again,
Something I thought about this morning and decided to put in here for
completeness' sake: MySQL will run on WinXP and act as a server, with
very little setup. More will be required on your
programmer's end, but it can be done, and in fact I do it. Depending
on your setup it could work just as easily as an ISP. What you will
need to do this however is a "static" IP address on the computer which
is running MySQL. Most ISPs these days use DHCP for dynamic IP
addresses. The clients (your stores for example) do not have to have
static IP addresses to make connection to the server, but do need to
know where the server is.
So you can have a complete "in-house" solution using MySQL as well,
but it will take more setup on your end to do so.
webadept-ga
|
Request for Answer Clarification by
bselltiz-ga
on
30 Mar 2005 12:28 PST
I am going to clearly re-define what I need, now that I have read these posts.
1st - SQL server is not an option for us - too expensive. I'll explain
why in a minute
2nd - SOAP sounds interesting but I'm not sure it's the best solution,
again I will clarify.
3rd - MySQL could be an option but I have to wonder if I could get the
same functionality with MS Access.
Here is my entire situation laid out as clearly as possible.
1. This entire problem revolves around the need for different
locations to be able to share information, without any other
interaction from the user than say clicking a button in our Access
application. This communication will take place using the internet,
and can not utilize a private network or VPN.
2. This solution will have to be able to work many times over, for
more than just one company as we are distributing this software to
different companies within our area, each having multiple locations
per company. Most of these companies will not be able to justify the
cost of SQL Server on top of the cost of our software.
3. Our company (the developer) owns a dedicated web server, which is
at our disposal if needed. I am starting to get the idea that there is
no way around our web server being involved in this. I had originally
envisioned a scenario where any of our client's locations (with high
speed access) would be able to act as a server for that particular
company. If this is possible it should be explored before the option
of our server being directly involved.
4. The data has to be at least somewhat secure. By somewhat I mean the
following. The data is currently protected by Access's password
protection and encryption. I know that this is not overly difficult to
break in itself, but nonetheless, I don't want the data to be less
secure than it currently is.
5. Each location will have specific needs for data as outlined below:
Stores: Will need to be able to pull data from the server. The server
should also be able to push the data to each store, if needed. This
will have to be able to be automated through VBA and Access 2002. The
data will consist of a master set of data that each store will use to
govern its operations. For example, each store will have the same SKU
assignments for parts because they will be using the information sent
from the server. Please note, this can not be dependent on a live
connection because the software still needs to be able to run even if
the internet connection is not available.
Warehouse: Will need to be able to analyze data that has been sent
from each of the stores. The warehouse should be able to pull this
data on request or use data that has been previously sent by each of
the stores.
Now this is where it could get tricky. Each of the stores uses an
identically structured database. Information from each of these
databases needs to be combined on the server so that data from
multiple stores can be analyzed and reported on across the company.
The tables within each store's database, each have fields that define
what location they originate from. They also each have a unique key.
The current setup first assumes that each of the store's tables has
somehow been uploaded to the server into its own directory. From that
point an Access application links to each set of tables and then
copies that information into a local master set of tables mirroring
the structure of each store's database. The records are
distinguishable by store because their key values are comprised of the
record ID and the Store ID. This setup works fine but is very
difficult when it comes to transferring the data between locations. It
also does not allow for viewing of live data, especially between
stores.
So in addition to the warehouse having to be able to analyze each
store's data, we also want to be able to have the stores hit a button
and see what the other stores have in the inventory. This is not
possible with the current setup.
|
Clarification of Answer by
webadept-ga
on
30 Mar 2005 15:09 PST
Thanks for the clarification. I just got this and it is quite a bit to
go through and address every issue you have presented. On first read
through, the answer is still the same, so it would be helpful for you
or your programmer to start looking into it. I will write up a
response for you and have it to you tomorrow.
You don't "need" the webserver to be involved. Any computer with a
static IP address can be used, which is generally an option with any
ISP for a small fee. MySQL can be downloaded for Windows off the
MySQL website and set up on any WinXP computer. You can also set
your router to forward the port MySQL uses for connection to another
computer and keep the same IP address you use for your server, while
using another computer for the database server.
As far as the Access compatibility goes, it has been, for me,
basically seamless. MySQL is a solid, very fast database server, which
Yahoo Finance uses because of its speed and stability, having switched
over from Solaris.
So, give it a look over, and I'll go through all of this and write up
how-tos for each area.
webadept-ga
|
Clarification of Answer by
webadept-ga
on
30 Mar 2005 15:19 PST
By the way, one of the security options would benefit from each store
having a static IP address, so please check into the availability of
that and let me know, if you can, what the "hassle factor" is for you
to get static IPs for each store.
webadept-ga
|
Request for Answer Clarification by
bselltiz-ga
on
31 Mar 2005 14:59 PST
We use a utility called DYNU for the static IP problem. It's basically
a client-side program that every 30 seconds or so reports that
machine's current IP to a central name server, which translates the
stored DYNU name to the current IP. Even if that didn't work, I don't
think the added cost of having a static IP provided to each store by
the ISP would be a problem.
One thing to consider also in your answer is that we will need the
ability to add Company locations at any time, hopefully without much
havoc.
|
Clarification of Answer by
webadept-ga
on
31 Mar 2005 15:47 PST
None of this is much of a problem, but adding future locations is
certainly the least of them. I'm trying to write all of this out for
you, while doing a few other things, and it is coming along. I'll have
it up and posted soon.
webadept-ga
|
Request for Answer Clarification by
bselltiz-ga
on
31 Mar 2005 19:47 PST
ok - sounds good
|
Clarification of Answer by
webadept-ga
on
01 Apr 2005 15:43 PST
1st - SQL server is not an option for us - too expensive. I'll explain
why in a minute
-- No need to explain, it is an expensive piece of software, for any company.
2nd - SOAP sounds interesting but I'm not sure it's the best solution,
again I will clarify.
-- There is nothing wrong with the SOAP solution. The only thing I
would say about it is that it is yet another area which will require
its own level of expertise and maintenance. The more different
technologies you have in a system, the harder it is to keep someone
around to maintain the system. It is also harder to find a replacement
if your current people are no longer available.
3rd - MySQL could be an option but I have to wonder if I could get the
same functionality with MS Access.
Yes, and I'll get into that in just a bit.
Here is my entire situation laid out as clearly as possible.
1. This entire problem revolves around the need for different
locations to be able to share information, without any other
interaction from the user than say clicking a button in our Access
application. This communication will take place using the internet,
and can not utilize a private network or VPN.
I have to wonder a bit on why it can't use the VPN, but you've said
that enough times that I'm just going to believe you.
Right now you are using Access, which is a database program, but not a
database server. What you need at this point, for many reasons, is a
database server.
A database server is designed to house information from various input
sources at the same time and to respond to multiple requests. Access
is not designed for this. Access is designed for a single user to
access it.
Access allows programs (such as VB) to utilize it as well, so you can
'trick' it into working like a server, but it isn't one, and it tends
to freeze up quite a bit when you start pushing it.
2. This solution will have to be able to work many times over, for
more than just one company as we are distributing this software to
different companies within our area, each having multiple locations
per company. Most of these companies will not be able to justify the
cost of SQL Server on top of the cost of our software.
No doubt. The expense is very large. You will want to read the MySQL
website for creating programs which utilize the MySQL server and make
sure you are aware of the limitations and license.
3. Our company (the developer) owns a dedicated web server, which is
at our disposal if needed. I am starting to get the idea that there is
no way around our web server being involved in this. I had originally
envisioned a scenario where any of our client's locations (with high
speed access) would be able to act as a server for that particular
company. If this is possible it should be explored before the option
of our server being directly involved.
This is possible, more than possible. MySQL doesn't require much of a
machine to run it. When we talk about MySQL as a server, we are
talking about the type of program it is, not the machine it is running
on. There is a huge difference here. A server program is a service,
which can listen and respond to multiple queries and requests at the
same time, without having to lock up tables and queue the requests.
There is really no reason you can not have MySQL running on one
machine, as I mentioned earlier. It should be a dedicated machine, but
it doesn't have to be a monster machine with massive amounts of
redundancy. It can be, but doesn't have to be, so you can offer
several levels to the customer. They can supply the machine, or you
could. Really this is an option area, not a problem area.
4. The data has to be at least somewhat secure. By somewhat I mean the
following. The data is currently protected by Access's password
protection and encryption. I know that this is not overly difficult to
break in itself, but nonetheless, I don't want the data to be less
secure than it currently is.
MySQL is quite a bit more secure than Access is. It is designed to
face the exploits and expertise found on the Internet, not just
someone's "PC" sitting in an office. MySQL was created to be a fast,
light Internet database server. Just about every website you come
across on the Internet with dynamic content (such as Yahoo Finance,
for example), and many banking systems, run MySQL servers.
Really, to be fair, every real Database Server I've ever worked with
(Solaris, DB2, Informix, MS SQL, AS/400 etc) is more secure than
Access. Access doesn't have to be secure in the way a database on the
Internet needs to be secure.
When you set up MySQL for your offices, you set up the database, and
then the tables inside that database. In your case you would probably
set up a separate database for each of the stores, which would be
duplicates of each other. Once those are set up, you would grant
privileges to these databases using an SQL statement. Let's say you
have a store in San Diego. We'll call that username sandiego, with an
IP address of 67.67.01.76 .. whatever. Anyway... we would tell MySQL
that the user sandiego can only connect from the IP address of
67.67.01.76.
grant all on sandiego_db.* to sandiego@67.67.01.76 identified by "sandiegoword"
Now, even if I have the username and the password, I still can not
access the server, because I don't have that IP address.
Another thing you can do, which banking systems do all the time, is
use "stored procedures". These are procedures that are assigned to a
user. We set these up as the functions the user sandiego is able to
perform, even if he has connected from the right IP address. So, even
if I do happen to connect, the only thing I can do is what sandiego is
allowed to do.
These aren't user accounts like the ones you are probably used to
thinking about. They are accounts which only the program itself is
using. No one is typing in the password. And most of the time, the
password is hashed using MD5, which as far as I know is still not
feasibly cracked. In your case, your ODBC connection is set up with
the username and password, which is hidden and not readable on the
machine. So even if it is my machine, I can only make it "not work at
all", not crack into the database. And again, even if I did, the only
thing I could do is what the program allows me to do anyway.
Okay, I'm still writing.. digest this and I'll be posting again here
soon. Just wanted to get something posted on here as soon as possible.
Many things came up yesterday, but my schedule is cleared right now,
so I'll be working on this until I'm done. If you have questions on
what I've said so far, post them and I'll get to them as I can. Might
be Sunday or Monday for further questions, but I will get to them.
webadept-ga
|
Clarification of Answer by
webadept-ga
on
01 Apr 2005 17:27 PST
"Stores: Will need to be able to pull data from the server. The server
should also be able to push the data to each store, if needed. This
will have to be able to be automated through VBA and Access 2002. The
data will consist of a master set of data that each store will use to
govern its operations. For example, each store will have the same SKU
assignments for parts because they will be using the information sent
from the server. Please note, this can not be dependent on a live
connection because the software still needs to be able to run even if
the internet connection is not available."
Even if you had a full time completely stable Internet connection you
wouldn't want to work completely off the server, but I would like to
point out that you could. It is a viable option. All of the data could
be on the server all of the time and the clients could work off MySQL
directly. The programming changes for this to happen would be minimal.
In fact, it would be the simple changing of the database table the
program is using most of the time and a recompile. That's it.
When you have a connection to MySQL through ODBC or Connector, or
whatever, the tables on the server show up in Access just like tables
on the local computer. The little symbol is different, but that's
about it. Really, it is quite awesome.
All that aside, it is not the wisest way to set things up, because
there is no such thing as a completely stable network connection. Sad
but true. On top of that it is a waste of bandwidth with all that
chatter back and forth. I was the system administrator and programmer
of a company in Seattle with offices from Snohomish down to Tacoma,
and we had it set up this way. It worked, but a whole office was down
and had to take notes on paper if the connection went down, which it
did about once a month. Sure, only a few hours, but for those few
hours the office was down.
What you would want to set up with this is a dual-entry system. I'll
explain it in simple terms here; just add the functions you would
want.
A sale comes in. You put it in the computer in the San Diego store.
That sale goes into the Access table it does now, and a smaller entry
goes into a journal table. The only thing the other stores or the
warehouse are interested in is the inventory, so you don't need much
information in the journal table. Once the journal table has three or
four entries in it (completely up to you), a function is keyed to send
the entries to the server. If the connection is down, it dies quietly,
with no error. It just tries again on the next sale.
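As a rough sketch of this journaling idea, here it is in Python, with SQLite standing in for the Access/MySQL pair; the table names, columns, and the three-entry flush threshold are all illustrative assumptions, not your actual schema:

```python
import sqlite3

# Rough sketch of the dual-entry journal, with SQLite standing in for
# the Access/MySQL pair. The table names, columns, and the three-entry
# flush threshold are all illustrative assumptions.
local = sqlite3.connect(":memory:")  # the store's local database
local.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER)")
local.execute("CREATE TABLE journal (sku TEXT, qty INTEGER)")

FLUSH_THRESHOLD = 3  # send journal entries in small batches

def record_sale(sku, qty, send_to_server):
    """Record a sale locally, then try to flush the journal.

    The sale always succeeds locally; a failed flush dies quietly and
    the queued entries simply wait for the next sale."""
    local.execute("INSERT INTO sales (sku, qty) VALUES (?, ?)", (sku, qty))
    local.execute("INSERT INTO journal (sku, qty) VALUES (?, ?)", (sku, qty))
    rows = local.execute("SELECT sku, qty FROM journal").fetchall()
    if len(rows) >= FLUSH_THRESHOLD:
        try:
            send_to_server(rows)  # e.g. an INSERT over the ODBC link
            local.execute("DELETE FROM journal")
        except Exception:
            pass  # connection down: keep the queue, no error to the user
    local.commit()
```

The point of the sketch is the failure path: the sale always commits locally, and a dead connection just leaves the journal queued for the next attempt.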
Now you have live inventory look-ups for the rest of the stores. When
they look up the inventory, their program is accessing the server
live, and doing the query across the net. These are stored procedures
under one username/password. The journal entries are under another.
Also, they are only three to four sale entries behind. At closing the
queue is cleared if a connection is available, or it tells the user
that the connection is down. (During the day the user probably knows
if the connection is down, but shouldn't be bothered anyway. There
should be a function on the server machine that pings the offices and
lets the administrator know, via phone page or email, if an office has
lost its connection.)
By the way, you might want to look into Tight VNC for remote
administration. (http://www.tightvnc.com/ ). It is fast, secure and
much better in my experience than products like pcAnywhere.
Now, there are tons of other ways to do what I just explained. The
real problem here is the number of options, not the lack of them.
Depending on your connection speed you could run the journal entry on
every sale, rather than just three or four at a time. More than likely
you will be able to do this, as it is a background process, and if it
fails, it simply goes into the queue, waits for the next sale, and
sends both at the same time when it can.
But the user and the sale are not dependent on it happening, and
really, the worst that happens is store two thinks store one has an
item that just walked out the door. This is easily solved by the
server keeping track of the journal entries. If the last one was a few
hours ago, then the user in store two could call to see what was up
before sending a customer over there. I don't know your situation, so
these suggestions are really just things I've run into over the last
decade.
--- more to come ---
|
Clarification of Answer by
webadept-ga
on
01 Apr 2005 17:35 PST
"Warehouse: Will need to be able to analyze data that has been sent
from each of the stores. The warehouse should be able to pull this
data on request or use data that has been previously sent by each of
the stores."
I believe I've covered this in the last post. The warehouse would have
a local system and be able to query the inventory journals from all of
the stores live. Again, you would use stored procedures for this and
keep in the journal table only what the warehouse would need,
part-number, item name, amount sold, date sold. They don't need
anything else in there (probably... again I don't know your system ...
but they wouldn't need things like customer name or credit card
numbers).
--- more to come ---
|
Clarification of Answer by
webadept-ga
on
01 Apr 2005 17:49 PST
"Now this is where it could get tricky. Each of the stores uses an
identically structured database. Information from each of these
databases needs to be combined on the server so that data from
multiple stores can be analyzed and reported on across the company.
The tables within each store's database, each have fields that define
what location they originate from. They also each have a unique key."
This isn't tricky at all. First of all, each store would have its own
database. This is where we start thinking about servers and not
Access, or Paradox, or little programs like that. Each of the data
tables is exactly the same, but on the server they are kept separate,
each in its own database, for several reasons. First is size. The
second is backups. The third is programmer sanity. And the fourth is
functionality (being able to add new stores on the fly).
See, you don't really need all the stores to be in the same database;
what you need is to be able to get reports that access all of the
stores, and to compare each of the stores separately and together.
Using MySQL, you get this with SQL queries, which can access many
databases at the same time and create temporary reporting tables on
the fly. In fact I just answered a question about this not too long
ago.
(http://answers.google.com/answers/threadview?id=498988 )
Using the temporary tables, you can create what you need, when you
need it, and drop it when you don't, without worrying about something
happening to your real data. (read that answer above, where I get into
this more fully).
"create temporary table emplist select * from test2.employee";
"insert into emplist select * from test.employee";
"ALTER TABLE `emplist` ADD INDEX `hrdate` ( `hire_date` )";
"select * from emplist order by hire_date";
Using this with stored procedures is a real safeguard for your data as well.
Having the store number and a unique id is still a very good idea,
don't get rid of that, but "what is really happening" and "what it
looks like" are two different things. I'm not going to get into a
dissertation on database theory here, but just keep in mind, the
smaller the data sets, the better. Making an elephant out of a bunch
of mice isn't always a good thing. However, you can get the elephant
anytime you want it.
-- more to come --
|
Clarification of Answer by
webadept-ga
on
01 Apr 2005 17:58 PST
"The current setup first assumes that each of the store's tables have
somehow been uploaded to the server into it's own directory. From that
point an Access application links to each set of tables and then
copies that information into a local master set of tables mirroring
the structure of each store's database. The records are
distinguishable by store because of their key values compromised of he
record ID and the Store ID. This setup works fine but is very
difficult when it comes transferring the data between locations. It
also does not allow for viewing of live data, especially between
stores."
Yeah, we used to call this "sneaker net", meaning you put it on a disk
and walked it over :-)
During the day you have your journaling going to the server. At
closing you send in the day's sales as one long INSERT SQL statement,
and you are done. Back up the stores as well if you like (it might be
a good idea to have local backups available), but if for some reason
those are destroyed or blanked, the server could reload the system.
You are only sending to the server what you did that day, and the
server is storing it, so the INSERT queries are as small as possible.
The server sends back to the client a range of messages telling the
program whether the information came through correctly or there was a
problem. The whole process will probably take 5 minutes (or less) if
you set it up right. Again, we are working with a server, so small is
good.
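A minimal sketch of that end-of-day upload, in Python with SQLite standing in for the MySQL server; the table and column names, and the store name "sandiego", are assumptions for illustration:

```python
import sqlite3

# Sketch of the end-of-day upload: the day's journal rows are sent as
# one multi-row INSERT statement. SQLite stands in for the MySQL
# server here, and the table/column names are hypothetical.
def day_end_sql(rows):
    """Build a single multi-row INSERT for the day's sales."""
    placeholders = ", ".join(["(?, ?, ?)"] * len(rows))
    sql = "INSERT INTO sales_history (store, sku, qty) VALUES " + placeholders
    params = [value for row in rows for value in row]
    return sql, params

# Server side: apply the whole day's batch in one statement.
server = sqlite3.connect(":memory:")
server.execute("CREATE TABLE sales_history (store TEXT, sku TEXT, qty INTEGER)")
sql, params = day_end_sql([("sandiego", "SKU1", 2), ("sandiego", "SKU2", 1)])
server.execute(sql, params)
server.commit()
```

One statement, one round trip, one transaction: that is what keeps the nightly upload down to a few minutes.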
MySQL also has a great little backup utility called mysqldump, which
dumps the whole database to an SQL datafile. Set the server to do this
once a night. If anything happens to the server, you just reload. Very
simple. You will probably be down a whole 10 minutes (unless the
machine caught on fire or something, then it might take a few more
hours :-)
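The dump-and-reload cycle can be sketched in Python; SQLite's iterdump() plays the role of mysqldump here, writing the whole database out as SQL statements that can be replayed to rebuild it (with MySQL you would schedule the mysqldump utility itself instead):

```python
import sqlite3

# The mysqldump idea, sketched with SQLite's equivalent: dump the
# whole database to a file of SQL statements, and reload that file to
# recover. With MySQL you would run mysqldump nightly instead.
def dump_db(con, path):
    """Write every CREATE/INSERT statement needed to rebuild the DB."""
    with open(path, "w") as f:
        for statement in con.iterdump():
            f.write(statement + "\n")

def reload_db(path):
    """Rebuild a database from a previous dump."""
    con = sqlite3.connect(":memory:")
    with open(path) as f:
        con.executescript(f.read())
    return con
```

Because the dump is plain SQL, restoring really is just replaying the file, which is why recovery can be a ten-minute job.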
-- more to come --
|
Clarification of Answer by
webadept-ga
on
01 Apr 2005 18:19 PST
"So in addition to the warehouse having to be able to analyze each
store's data, we also want to be able to have the stores hit a button
and see what the other stores have in the inventory. This is not
possible with the current setup."
I believe I've already covered this. But to go over it again, the
connection here would be to the Server asking for a report. You would
utilize the main store tables for history, and then add in the journal
entries to create "current inventory" by creating a temporary table
with the SQL query, and then running the report. The temporary table
would be created with only the fields it needs to create the report.
The best way to do this is to create a "Beginning Inventory Count" in
a table every night with a cron job (or a Scheduled Task on Windows
XP), and then just add in the journaling counts as needed for the
reports.
This is much easier on the server during the day when it is also doing
other things. Again, there are many different ways of doing the same
thing.
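A sketch of that beginning-count-plus-journal arithmetic, in Python with SQLite standing in for MySQL; the table and column names are hypothetical:

```python
import sqlite3

# Sketch of the "Beginning Inventory Count" idea: a nightly job
# snapshots the counts, and daytime reports just subtract the day's
# journal entries. SQLite stands in for MySQL; names are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE beginning_inventory (sku TEXT PRIMARY KEY, qty INTEGER)")
con.execute("CREATE TABLE journal (sku TEXT, qty_sold INTEGER)")

def current_inventory():
    """Beginning count minus whatever the journal says was sold today."""
    return con.execute(
        "SELECT b.sku, b.qty - COALESCE(SUM(j.qty_sold), 0)"
        " FROM beginning_inventory b"
        " LEFT JOIN journal j ON j.sku = b.sku"
        " GROUP BY b.sku").fetchall()
```

The heavy counting happens once a night; during the day the report only has to scan the small journal table.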
--- think I'm done ---
Okay, I've covered just about everything here. If you have something
else you would like looked at, just post the clarification and I'll
get to it. I'm out of here for the night, and won't be back to "real"
work until Monday, but I might get to it on Sunday. I'll check tonight
before I take off to see if you posted anything that I can answer
before I leave.
MySQL can be a bit daunting when you first start working with it from
the command line. There is a product called phpMyAdmin that gives it a
very nice web interface. Easy to set up, and very nice to work with.
(http://www.phpmyadmin.net/home_page/index.php)
Also there are other clients which are pretty good as well, such as
MySQL Administrator (http://www.mysql.com/products/administrator/ )
which make life a bit easier.
thanks,
webadept-ga
|
Clarification of Answer by
webadept-ga
on
01 Apr 2005 18:36 PST
Something willcodeforfood-ga brought up below (and by the way, thanks
willcodeforfood-ga for the comments and other ideas; always more than
one way to do things, eh? And I'll code for pizza as well :-)
Price changes and the like: it is very easy to set things up so that
the stores always have the same information in them for pricing,
sales, and specials.
What you do here is set up a pricing table. The main office accesses
that table and changes data as it sees fit, and then the other stores
get that information on the morning "refresh". It is rare that a "sale
price" needs to go into effect the same day it is created, but you can
add that as well. But the real point here is that with a centralized
database server, you can make global changes which all of the stores
get automatically on startup and refresh.
What you don't want to do is pass "all" of their data to them every
day. Just pricing (for example). To do this you have a product list
table which has at most: product ID, product price. It is called a
"lookup table". One entry per item.
Another way you could do this is to have the program create a
temporary table, which it then downloads and uses to update the local
table. But you don't need to pass everything to the client.
Even if you are fixing something like a spelling mistake in the
product name, you do this as simply as possible. Create a query that
looks for changes, and then download just the changes, such as the
name. In the program area where the user makes the change or adds the
new product, have it flag those records where a change is made (last
change date, entry date, etc.) so that the clients can find them.
This is where a great deal of thought needs to come in and some trial
and error. There are times when it is better/easier to just grab the
whole row and UPDATE the client with everything. But the fewer the
better.
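Here is a rough sketch of that changed-rows-only pull, in Python with SQLite standing in for MySQL; the price_list table and its last_change counter are illustrative assumptions, not your schema:

```python
import sqlite3

# Sketch of the lookup-table refresh: the server flags changed rows
# with a last_change counter, and each store pulls only rows changed
# since its last sync instead of the whole product list. SQLite stands
# in for MySQL, and all names here are illustrative assumptions.
server = sqlite3.connect(":memory:")
server.execute(
    "CREATE TABLE price_list (product_id INTEGER PRIMARY KEY,"
    " price REAL, last_change INTEGER)")

def pull_changes(store_last_sync):
    """Return only the rows changed since the store's last sync point."""
    return server.execute(
        "SELECT product_id, price, last_change FROM price_list"
        " WHERE last_change > ?", (store_last_sync,)).fetchall()
```

Each store just remembers the highest last_change value it has seen, so the morning refresh moves a handful of rows rather than the whole product list.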
webadept-ga
|
Request for Answer Clarification by
bselltiz-ga
on
01 Apr 2005 21:18 PST
I'm heading to Virginia tonight and won't be back until Tuesday, so I
won't be able to give this the attention it deserves until at least
Tuesday night.
I've started reading a little, and so far you haven't lost me, so keep
it coming and I will do my best to take it all in when I get back.
Thanks, I'll post again when I get back :->
Brian
|
Clarification of Answer by
webadept-ga
on
02 Apr 2005 09:56 PST
No problem, it works out well for my schedule anyway.
Have a great weekend.
webadept-ga
|
Request for Answer Clarification by
bselltiz-ga
on
06 Apr 2005 17:57 PDT
ok, tell me if I have this right:
The warehouse contains a computer acting as a server
This server holds a copy of each store's data, in a MySQL DB for each store.
Each copy links, via the ODBC driver - over the net, to the store's Access DB.
This copy will be updated with all of the store's activity at the end
of business for that store, via table links from the store, up to the
MySQL DB for that store.
Since each store's DB is named the same, will they have to reside in
different folders on the server or am I going to be manually naming
these DB's on creation?
Another MySQL DB, queries all of these store copy DBs, consolidating
the data into a temporary table for reporting.
My Access application on the server then links to the main MySQL DB to
get its data for reporting.
I understand the journal part of it, that makes sense.
Also... instead of having the server's copies of each store's DB be
MySQL DBs, can they just be Access DBs? If so, I imagine I could still
implement the journal idea. So I'm basically asking if I can use
Access's table-linking technology over the internet.
If I were able to do this, then wouldn't I be able to use my current
setup? I know I would have to implement the journal idea to get real
time inventory stats.
If I am anywhere near on course with the above questions then I don't
understand which DB the stores will be querying, or how for that
matter, when they need their master set of data or the other store's
inventory data.
Sorry if I'm way off base here.
|
Request for Answer Clarification by
bselltiz-ga
on
11 Apr 2005 16:45 PDT
still out there?
|
Clarification of Answer by
webadept-ga
on
12 Apr 2005 06:11 PDT
Hi,
Yes I am... give me a couple hours to clear my desk here and get some
coffee, and I'll continue on with your clarification request.
thanks,
webadept-ga
|
Clarification of Answer by
webadept-ga
on
12 Apr 2005 21:06 PDT
Hi,
Sorry it is late in the day before I'm getting back to you, but it
looks like willcodeforfood-ga has been here, which I am grateful for.
I simply could not get to this today, no matter how hard I tried.
Anyway...
What willcodeforfood-ga went through for you there is accurate and
well done. The setup and first look at the MySQL ODBC connection might
be a bit daunting, but it is well documented now and the walk-through
is pretty straightforward.
As suggested, load the MySQL server on a machine and connect to it
with Access locally. Get to know it, and what it takes to make the
connection.
In Access your MySQL databases will come up as "Linked" tables and
data sources. But other than the name, they don't act much
differently.
There is a very good book you will probably want at your side from
O'Reilly, called "MySQL Cookbook". Best $50.00 you'll spend for a long
time. There are a number of sources on the Internet as well that will
help you with this too.
Paul DuBois wrote a Migration tutorial
http://www.kitebird.com/articles/access-migrate.html
A quick and dirty HOWTO is here
http://www.itc.virginia.edu/desktop/web/mysql/msaccess.html
Another walk through is here
http://entropy.homelinux.org/MySQL/access_and_mysql.html
The book will help out a great deal, and get you through the learning
curve with SQL and so forth. Very well written (no, I didn't write it
:-)
Let me know if you need something else with this.
webadept-ga
|
Clarification of Answer by
webadept-ga
on
14 Apr 2005 15:14 PDT
Hi,
Saw your comments, and just wanted to point out that
willcodeforfood-ga is not, at the present time, a Google Answer
Researcher.
thanks,
webadept-ga
|