Sorry for the delay in responding--I've been on the road and all, and dealing with hurricanes and what not. In case you haven't resolved this yet, though, here are some suggestions.
Regarding the lines to include the binary log file name and the starting position, the --master-data option with mysqldump should put these near the beginning of the dump file. If not, you can manually run them on the slave:
CHANGE MASTER TO
  MASTER_LOG_FILE = 'bin.000001',
  MASTER_LOG_POS = 4;
You can get the correct values for your server by entering this statement on the master:
SHOW MASTER STATUS;
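To make the pairing concrete, here is how the two statements fit together; the log file name and position below are made-up values, so substitute whatever SHOW MASTER STATUS actually reports on your master:

```sql
-- On the master, note the current binary log coordinates:
SHOW MASTER STATUS;
-- Suppose it reports File: bin.000003, Position: 9824 (hypothetical values).

-- On the slave, point replication at those coordinates:
CHANGE MASTER TO
  MASTER_LOG_FILE = 'bin.000003',
  MASTER_LOG_POS = 9824;
```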
Regarding the user information, if there are already users on the slave--meaning it's not going to be used just for backups--and for security purposes, you probably should add the --ignore-table=mysql.user option to the mysqldump command when backing up the master. This will exclude the user table, and thereby the passwords, from the dump file. It should take care of the host name problem you mentioned related to the users, as well. If you utilize the other tables in the mysql database, since they contain references to users and host names, you should exclude them, too, by adding a separate --ignore-table=mysql.xxxx option for each one. I should have included all of this in the article.
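Putting those options together, the backup command on the master might look something like this. The user name, output file name, and the exact list of excluded grant tables are just examples for illustration; adjust them for your setup:

```shell
# Dump all databases, recording the master's binary log coordinates
# in the dump file (--master-data), while keeping the grant tables
# in the mysql database out of it. Add one --ignore-table option per
# table you want excluded.
mysqldump --user=root --password \
  --master-data --all-databases \
  --ignore-table=mysql.user \
  --ignore-table=mysql.db \
  --ignore-table=mysql.tables_priv \
  > master-backup.sql
```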
Let me make two more clarifying points about my article: don't start the slave replicating until you have the data copied, or else it will begin replicating before you can load the dump file. The catch is that you need MySQL running on the slave so that you can load the dump file. To get around this, after you have the slave configured initially, restart the mysqld daemon (or mysqld_safe) with the --skip-slave-start option. Better yet, put that option in the my.cnf file (as skip-slave-start, without the leading dashes) to be safe. Just delete it after replication is running.
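In the my.cnf file on the slave, that option would go in the [mysqld] section, something like this:

```ini
[mysqld]
# Keep the slave threads from starting automatically at startup;
# remove this line once replication is up and running.
skip-slave-start
```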
There is another point I'd like to add now that I'm reading your comments and a couple of others: if your master has been binary logging for quite a while before you tried setting up replication, unless you have a use for the logs, you might want to start fresh by issuing a RESET MASTER statement on the master. This will delete all of the binary log files, and it will commit any outstanding transactions (e.g., on InnoDB and BDB tables), so be sure that you want to do this. You might want to make an extra backup of the databases and the binary logs before you reset the master, by the way. After you reset the master, make your backup and copy it to the slave server while replication isn't running. Then start the slave with the START SLAVE statement. Starting fresh makes it easy to be assured of a good clean start. Run SHOW PROCESSLIST on both servers to see the state of each. Also, run SHOW SLAVE STATUS on the slave to see if everything looks okay; it will list the last error number and message if replication failed after starting. Check the error logs for clues if there's a problem.
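The fresh-start sequence above boils down to something like the following, with the backup and restore happening between the first and second statements:

```sql
-- On the master: delete all binary logs and start numbering over.
-- Remember that this also commits any outstanding transactions.
RESET MASTER;

-- (Now take the backup with mysqldump and load it on the slave.)

-- On the slave: begin replicating from the master's fresh logs.
START SLAVE;

-- Verify the slave threads are running and check for errors.
SHOW SLAVE STATUS;
```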