No. Realistically you will need a lot more, since you are completely neglecting your system's base memory footprint. Is your system mostly used read-only by unauthenticated users? If so, you can put nginx with microcaching in front of it to ease the burden, as sketched below.
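A minimal microcaching sketch, assuming PHP-FPM behind nginx; the cache directory, zone name and socket path are placeholders you would adjust to your setup. With a 1-second TTL, any burst of anonymous traffic collapses to roughly one PHP request per URL per second:

```
# In the http block: cache zone -- path, name and sizes are placeholders
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m
                   max_size=100m inactive=10s;

server {
    listen 80;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm.sock;   # adjust to your PHP-FPM socket
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        fastcgi_cache microcache;
        fastcgi_cache_key $scheme$request_method$host$request_uri;
        fastcgi_cache_valid 200 1s;        # cache successful responses for 1 second
        fastcgi_cache_use_stale updating;  # serve stale copy while one request refreshes
        fastcgi_cache_lock on;             # collapse concurrent misses into one upstream hit
    }
}
```

Note this only works for responses that are identical for everyone; authenticated or personalized pages would need to bypass the cache.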
The memory usage you see is only what PHP itself uses; things like base system memory usage (as Da:Sourcerer says), Apache memory usage (around 20 MB per worker) or MySQL usage are not taken into consideration. And if you have other services running on the same machine, say an Elasticsearch instance or something similar, you'll need to add those too. A rough budget might look like the sketch below.
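A back-of-the-envelope sketch of such a budget; every number here is an assumption, so substitute your own measurements (from `top`, `ps`, or PHP's `memory_get_peak_usage()`):

```python
# Hypothetical RAM budget for a single LAMP box -- all figures are placeholders.
os_and_services_mb = 512    # base OS, sshd, cron, monitoring ...
mysql_mb           = 1024   # InnoDB buffer pool + per-connection buffers
apache_workers     = 100    # e.g. MaxRequestWorkers
apache_worker_mb   = 20     # resident size per worker
php_per_request_mb = 32     # peak PHP usage per request

total_mb = (os_and_services_mb + mysql_mb
            + apache_workers * (apache_worker_mb + php_per_request_mb))
print(f"Budget: {total_mb} MB (~{total_mb / 1024:.1f} GB)")  # ~6.6 GB with these numbers
```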
Further, what does "1000 users at the same time" mean for you: 1000 requests within the same second? Distributed over 1 minute? 3.6 million requests per hour (the equivalent of 1000 per second, sustained)?
If you have to serve 1000 users with unique responses every second, your processors have to keep up with it too. Suppose that handling one request takes 200 ms: one processor core can then do 5 requests per second, and 8 cores can do 40, which means that ideally you would need 25 such 8-core machines at that performance. On the other hand, one machine should be able to serve 1000 requests evenly distributed over one minute. In other words, you might have to review your goal(s) and/or think about a cloud, dynamic allocation of "instances" and shared and/or synchronized database servers.
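Spelled out, taking the 200 ms figure at face value and assuming ideal linear scaling:

```
1 core:            1000 ms / 200 ms =  5 requests/s
8 cores:           8 x 5            = 40 requests/s
1000 requests/s:   1000 / 40        = 25 eight-core machines
1000 requests/min: 1000 / 60        ~ 17 requests/s -> one such machine suffices
```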
Again, no. Memory usage doesn't scale in a linear fashion: you may easily double or even triple that number. As le_top mentioned, you might have to review your goals. If 100 requests/s really is your mean load and you cannot apply microcaching, a single server won't be sufficient. You'll have to consider clustering, load balancing and the like.