The above stories of additional latency are a bit overblown, too. A low-quality switch may add a whopping 3 ms of latency, and better ones are measured in hundreds of microseconds. Most will have the switching type and speed listed in their specs, but we're not talking significant numbers here.
In a "two switch network" as you've described, I wouldn't worry about it in the slightest. What I would worry about is the number of users who have to cross the cascade link simultaneously.
But again, in your case I believe you have a single server with a single gigabit connection running to it, and all users are sharing that single gigabit pipe to the server equally.
What you'll essentially be doing is creating a bottleneck between the switches: a single 1 Gb pipe that two users on the non-server switch have to share if they both try to run at full data rate.
But that doesn't matter if the server is also on a single 1 Gb pipe. They'd have to share that anyway.
(In larger environments a server may be fed by a trunk of multiple gigabit connections, or an even faster technology, so users are sharing a much bigger pipe from the switching matrix of devices to the server itself.)
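The sharing arithmetic above can be sketched in a few lines. This is a hypothetical back-of-envelope model (ideal fair sharing, no protocol overhead), just to show why the cascade link isn't the limiting factor when the server itself sits on a single gigabit port:

```python
def per_user_throughput_gbps(link_gbps: float, active_users: int) -> float:
    """Bandwidth each active user gets on an ideally fair-shared link."""
    return link_gbps / active_users

# Two users saturating the 1 Gb cascade link each get ~0.5 Gb/s...
print(per_user_throughput_gbps(1.0, 2))  # 0.5
# ...but they'd split the server's single 1 Gb NIC the same way,
# so cascading doesn't cost them anything in this scenario.
print(per_user_throughput_gbps(1.0, 2))  # 0.5
```

The same function also shows where a trunked server link helps: feed the server with a 4 Gb trunk and those same two users are no longer bottlenecked at the server side.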
I did raise an eyebrow at "there's only three computers working at a time" though... I assume and hope you're saying that there's lots and lots of computers on the network (thus the need for two switches) but only three people are working at once?
Keeping in mind that ...
A) 48-port gigabit switches of medium quality are relatively dirt cheap.
B) Computers run things constantly now even when people aren't actively working on them. The largest culprit would be automatic software update downloads.
I don't see a solid reason to cascade switches unless you've burnt up 48 ports.
24-port switches of medium quality are even cheaper, if you have fewer than 24 connections to make.
We could also talk about backplane limitations and switching type (store-and-forward vs. cut-through, etc.), but it's all way overkill for a local gigabit LAN of two switches and one server.
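To put a number on why it's overkill: a store-and-forward switch must receive the whole frame before sending it on, so the minimum latency it can add per hop is just the frame's serialization time. A quick sketch of that arithmetic (standard Ethernet frame sizes, gigabit line rate):

```python
def store_and_forward_delay_us(frame_bytes: int, link_gbps: float) -> float:
    """Minimum per-hop latency added by a store-and-forward switch:
    the entire frame must arrive before forwarding can begin.
    bits / (Gb/s) expressed in microseconds."""
    return frame_bytes * 8 / (link_gbps * 1e3)

# A full-size 1518-byte Ethernet frame on a gigabit link:
print(round(store_and_forward_delay_us(1518, 1.0), 2))  # 12.14 microseconds
```

Roughly 12 µs per hop for the largest standard frames, i.e. thousands of times smaller than anything a user would notice, which is why none of this matters for a two-switch LAN.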
You won't see any problems cascading those two. Just don't keep doing that if you add 100 more workstations. That's when you'll get into trouble.
Do make sure both switches are the same speed, as someone else mentioned, or you'll need to look carefully at the use cases for the slower switch. It's all about which path the users have to travel to get to the server.
If you have two gigabit switches cascaded and only a gigabit to the server anyway -- it falls into the category my VP of Design calls "ZFG": Zero ****s Given. LOL.
Start getting to three switches, or a trunked link to the server, and now you have to design it.
But in the end, if you have a network with fewer than 48 ports, just buy a 48-port switch.
Here's the real kicker for many businesses: business continuity after a failure. If you use a single switch, do you maintain an on-site or immediately available spare to swap in to keep the critical business functions running?
At least in a two-switch environment, if they're in the same physical location, you can swap some cables around and keep half of your systems running while a replacement for the dead one is being shipped.
There are a bunch of assumptions about your business size and whatnot above, but I can't think of an environment at the size I'm guessing you're at that would be harmed in any way by having two cascaded switches.