Forged Alliance Forever Forums — feed for "FA Networking - Centralized p2p." (/feed.php?f=2&t=5429)

Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=84495#p84495)
Either we find that the disconnected peer failed to send a last ack that was
needed to give proper input to the rest of the replay recorders (while a
workaround is already in place for the game state), in which case we just have
to synthesize the one missing ack; or else we can try to simulate a living peer
that acks all command packets without sending any of its own until the end of
the game, just as a dead player would.
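
The second option could look something like this sketch (entirely hypothetical: parse_items() and build_ack() stand in for the still-unknown wire format):

Code:
import socket

# Hypothetical "ghost" peer: acknowledges every command packet it sees
# and never sends commands of its own, like a dead player would.
# parse_items() and build_ack() are placeholders for the real format.
def run_ghost_peer(bind_addr, parse_items, build_ack):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind_addr)
    while True:
        raw, peer = sock.recvfrom(4096)
        for item in parse_items(raw):      # payload items in the packet
            if item["type"] == "command":  # ack commands, send nothing else
                sock.sendto(build_ack(item), peer)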

What we know so far is that a packet has a header that is a poor man's TCP,
and that the payload contains a number of independent items (presumably each
with its own type and length field). In some of the payload items we are going
to find the same data that is in replays, i.e. the command stream (or more
precisely, a command-stream fragment to be executed in a specific simulation tick).
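
If each payload item really does carry its own type and length field, a first-pass splitter could look like this (a guess, not the confirmed format; the field widths and endianness are assumptions):

Code:
import struct

# Speculative: assumes each payload item starts with a 1-byte type and a
# little-endian 2-byte length, followed by `length` bytes of item data.
# The real field widths and byte order are not yet confirmed.
def split_payload_items(payload):
    items, offset = [], 0
    while offset + 3 <= len(payload):
        item_type, length = struct.unpack_from("<BH", payload, offset)
        offset += 3
        items.append((item_type, payload[offset:offset + length]))
        offset += length
    return items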

Statistics: Posted by rootbeer23 — 28 Oct 2014, 06:18


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=84390#p84390)
http://forums.gaspowered.com/viewtopic.php?f=10&t=15498

btw:
"Net_LogPackets" in console dumps packets into %TEMP%

William wrote:
Also, you report that the connection was typically 32kbps, corresponding to 1kB/peer/sec. Which tells us that most of the time traffic was successfully getting through and we didn't end up saturating the pipe with resends. That is, unless one connection was saturated and the others were good. Then 2k+3(.7k) would have averaged out to about 1kB/peer, so that doesn't actually tell us that the pipe to a single peer wasn't saturated.

The quality of a connection can be measured by three other things in addition to bandwidth: latency (how long does it take a packet to get from end to end), jitter (how much does the latency vary from packet to packet), and reliability (how likely is it that a packet sent will actually make it). Verizon is (now) giving you your 128k in bandwidth. And if you are uploading files at that rate, then TCP can successfully compensate for the other 3 characteristics. But given what you are describing, it looks like SupCom isn't.

In order to figure out what is going on, there are a couple of things you can try. First off, it is quite easy to measure/control bandwidth. You are already measuring it with that ren_ShowBandwidthUsage command. To control it, you can change the net_MaxSendRate variable to something other than 2048. In a 5 player game (4 peers) on a 128kbps connection, setting it to 4096 would keep you below the theoretical saturation point. If you set it to something huge, then that effectively disables the clamp and you can see the effect of saturating the connection.

Latency is also easy to measure. That's just the reported ping number minus the current setting of net_AckDelay (defaults to 25, in milliseconds).

Unfortunately, there aren't easy ways to get supcom to tell you about jitter and reliability. I just never got around to adding code to tally them up and display them in a nice graph.

I do have one idea for an experiment you and your friend could try if you are game, though:

Set up a 1v1 game with your friend with cheats enabled and the game speed set to adjustable. If it is 1v1 then all network traffic is just between the two of you, and you don't have to worry about how much of the ren_ShowBandwidthUsage graph is the connection in question and how much is the good connections.

Once in game, set net_MaxSendRate to something absurdly huge so that it doesn't come into play.

Now fire up the ren_ShowBandwidthUsage graph on both your machine and your friend's. Then compare the send bandwidth on the machine of the person spamming commands to the receive bandwidth on the other machine. They should match if everything is going well.

Now play with the game speed. Run it up to +10. The bandwidth used should increase drastically. Does significant packet loss start showing up someplace? What happens if you slow the game down?

If just speeding up the game doesn't stress the network enough, you can also create 1000 or so units and feed 'em lots of commands. The easiest way to do this would be to use the Alt-F2 menu to create one unit. Then select it and hit Ctrl-Shift-C (copy) followed by Ctrl-Shift-V (paste) several times. Now select all those units, Ctrl-Shift-C again, followed by Ctrl-Shift-V several more times. Repeat until you have a big pile of units. Select all those units and then spam the right mouse button on various places on the map. The network usage should spike while it is sending those move commands.

There are also lots of other console variables that control how and when supcom goes about sending packets:

net_SendDelay - how long (ms) to delay once some data is available before sending it out. If too short, we'll waste bandwidth sending lots of small packets. If too large, the perceived latency will be too high.

net_AckDelay - like SendDelay, but for acking incoming packets instead of for sending outgoing data.

net_ResendPingMultiplier and net_ResendDelayBias - control how long we wait before resending a packet that hasn't yet been acked. The exact formula is "(ping*multiplier+bias)*2^try" where "try" is the number of times we've already tried to send this packet (clamped to 3 so the resend delay doesn't get too huge).

net_MinResendDelay and net_MaxResendDelay - more clamps on the resend time.

net_MaxSendRate - max bytes/sec to send to any one client.

net_MaxBacklog - We can actually briefly exceed the max send rate as long as the amount of data buffered is less than this. Defaults to 2k. This is to model the fact that the router/modem has a buffer in it, so it can take a few packets quickly without dropping any of them; but if we are going to keep sending, we need to back off to a rate it can sustain.

Another quite useful command is "wld_ClientDebugDump", which dumps out lots of state about the network connections to the F9 log. If the eject dialog comes up and you don't know why, this can sometimes lend some insight into what has gone wrong internally with the connection.
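
For reference, the resend schedule William describes works out to roughly the following (a sketch; exactly how the min/max clamps interact with the backoff is an assumption):

Code:
# Sketch of the resend schedule quoted above:
# delay = (ping * net_ResendPingMultiplier + net_ResendDelayBias) * 2^try,
# with "try" clamped to 3 and the result clamped to [min, max].
def resend_delay_ms(ping_ms, tries, multiplier=1.0, bias_ms=25,
                    min_ms=100, max_ms=1000):
    delay = (ping_ms * multiplier + bias_ms) * 2 ** min(tries, 3)
    return max(min_ms, min(max_ms, delay))

# At 100 ms ping and default settings this gives 125, 250, 500, 1000,
# 1000, ... ms for try = 0, 1, 2, 3, 4, ...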

Statistics: Posted by Crotalus — 27 Oct 2014, 08:02


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=84383#p84383)
william-with-weak-memory on the uber forums talks about ack packets in the payload, and that only makes
sense if they are there and the ack in the header refers to something else. And we would need to
tinker with the acks.

Statistics: Posted by rootbeer23 — 27 Oct 2014, 05:30


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=84382#p84382)
Code:
net_AckDelay, 25 ms,             Number of milliseconds to delay before sending ACKs
net_Lag, 500 ms,                 Input command lag
net_MaxBacklog, 2048 bytes,      Maximum number of bytes to backlog to any one client
net_MaxResendDelay, 1000 ms,     Maximum number of milliseconds to delay before resending a packet
net_MaxSendRate, 2048 bytes,     Maximum number of bytes to send per second to any one client
net_MinResendDelay, 100 ms,      Minimum number of milliseconds to delay before resending a packet
net_ResendDelayBias, 25 ms,      The resend delay is ping*net_ResendPingMultiplier+net_ResendDelayBias
net_ResendPingMultiplier, 1.00,  The resend delay is ping*net_ResendPingMultiplier+net_ResendDelayBias
net_SendDelay, 25 ms,            Number of milliseconds to delay before sending data

Statistics: Posted by Crotalus — 27 Oct 2014, 05:27


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=84379#p84379)
It would be possible to get down to 2(n-1) if the sim were able to revert and issue commands back in time, but I don't think it can.

Statistics: Posted by Sheeo — 27 Oct 2014, 04:40


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=84378#p84378)
the acks are needed to guarantee that a certain command packet is executed in a specific simtick, because the other requirement,
namely that all command packets that are executed in that simtick are executed in the
same order, is easy to achieve: sort by issuing player id.
(there is a hint indicating this behaviour here: http://www.gamasutra.com/view/news/1260 ... esyncs.php)
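
That ordering rule is trivial to express (a sketch; the field names are hypothetical):

Code:
# Deterministic total order within a simtick: take the commands scheduled
# for that tick and sort them by issuing player id.
def order_commands_for_tick(commands, tick):
    return sorted((c for c in commands if c["tick"] == tick),
                  key=lambda c: c["player_id"])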

for the current implementation i can see why in an 8 player game this is required to
ensure the above:

1) player 1 sends a command packet (which includes the target future simtick) to the 7 others
2) players 2-8 each send 7 ack packets to the other peers
Final tally: (N - 1) command packets + (N - 1)^2 acks

at which point all 8 players can be sure that the command packet is scheduled for execution
in simtick T.

with a server it looks like this:

1) player 1 sends the command packet to the server.
2) the server sends the command packet to the 7 peers
3) the 7 peers send acks to the server
4) the server sends 8 acks to the peers once it has received all 7 outstanding acks.
each of these 8 acks is a placeholder for 6 acks.
(because each player already knows that player 1 knows the command and that it
knows the command itself, 6 acks remain)
For player 1 the final ack packet is a placeholder for 7 outstanding acks.
Final tally: 1 + (N - 1) + (N - 1) + N == 3N - 1

the server cannot directly ack the command packet from peer 1, as that would give
peer 1 permission to execute it while the server could not yet guarantee that it
can distribute the command packet to all other peers in time.
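
The two tallies are easy to sanity-check (counting command packets plus acks under each scheme, following the steps above):

Code:
# Packet counts per command in an N-player game.
def full_mesh_packets(n):
    # n-1 command packets, then each of the n-1 receivers
    # acks to the other n-1 peers: (n-1)^2 acks.
    return (n - 1) + (n - 1) ** 2

def server_relay_packets(n):
    # 1 command to the server, n-1 relayed copies, n-1 acks back
    # to the server, n aggregate acks out: 3n - 1 in total.
    return 1 + (n - 1) + (n - 1) + n

for n in (2, 4, 8):
    print(n, full_mesh_packets(n), server_relay_packets(n))
# n=8: 56 packets in the full mesh vs 23 via the server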

Statistics: Posted by rootbeer23 — 27 Oct 2014, 04:22


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=84374#p84374)
This does mean we're talking about 2+8+8 total packets, though.

That is, the client doesn't need to know that the server received acks from the other peers -- but of course the server does need to ensure that it has received them all.

So, in formal terms we go from O(n^2) to O(n) packets.

Statistics: Posted by Sheeo — 27 Oct 2014, 03:09


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=84373#p84373)
Sheeo wrote:
So, to go by your example -- player 1 sends a command packet to every peer, say 8. These 8 players need to send an ack-packet to all other peers: 8^2 packets.

With a centralised peer that controls the commandstream, the player sends the command packet to the central peer, which sends it on to the 8 peers. Then every peer sends 1 ack packet back to the central peer, and we only have 8+8 packets sent in total.


how does player 1 know that the server received the ack from player2...player8?

Statistics: Posted by rootbeer23 — 27 Oct 2014, 02:52


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=84372#p84372)
rootbeer23 wrote:
Ze_PilOt wrote:
I won't explain how FA networking works. It's peer to peer, nothing fancy.

But I will explain how Blizzard does it.
It's peer to peer with a centralized server.

Like FA, the sim is computed on each client. But instead of sending your commands to all other players, you send your commands a single time to the server, and that server dispatches them to all other clients.

Meaning that for a 4v4 game, in FA you need to send your commands to 7 players; in starcraft2 you only need to send them to one.
In both cases, you receive 7 commands.

Blizzard's way of doing things solves the upload problem (download is rarely the limitation here).


When you send a command packet currently, player 1 sends it to player 2, ..., player 8. 7 packets total.
If you do it with centralized p2p, player 1 sends a packet to the server, and the server sends it to the 7 players. 8 packets total.
When player 1's upload bandwidth is limited, this is beneficial. But in the general case I don't see how it would be better.
You are only sending the packet on a suboptimal route.


It's beneficial because of the p2p protocol.

What's crucial for FA to run in a networked situation is that there is a commandstream that all peers agree upon, this means that we require the same order of commands on all peers, before they're executed.

The way the protocol does this is to send an acknowledgement upon receipt of a command packet. But a simple receipt isn't enough to agree upon the order: just ack'ing every packet back to its sender does not achieve a total order of the commandstream. So what is done instead is to send an ack-packet to every peer, announcing that we received the given packet. This means that a command is executed only when all peers have said that they received it.

So, to go by your example -- player 1 sends a command packet to every peer, say 8. These 8 players need to send an ack-packet to all other peers: 8^2 packets.

With a centralised peer that controls the commandstream, the player sends the command packet to the central peer, which sends it on to the 8 peers. Then every peer sends 1 ack packet back to the central peer, and we only have 8+8 packets sent in total.

This will reduce network lag in games with many peers, and potentially allow for games with a lot more players.
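
A minimal sketch of the aggregation step on such a central peer (hypothetical structure, not actual FAF code):

Code:
from collections import defaultdict

# Hypothetical central relay: forward each command to all other peers,
# collect their acks, and release one aggregate ack per command once
# every peer has confirmed receipt.
class CentralRelay:
    def __init__(self, peers, send):
        self.peers = peers               # list of peer ids
        self.send = send                 # send(peer_id, message) callback
        self.pending = defaultdict(set)  # command id -> peers yet to ack

    def on_command(self, sender, cmd_id, packet):
        others = [p for p in self.peers if p != sender]
        self.pending[cmd_id] = set(others)
        for p in others:
            self.send(p, ("CMD", cmd_id, packet))

    def on_ack(self, peer, cmd_id):
        if cmd_id not in self.pending:   # late or duplicate ack
            return
        self.pending[cmd_id].discard(peer)
        if not self.pending[cmd_id]:     # everyone has the command
            del self.pending[cmd_id]
            for p in self.peers:         # one aggregate ack each
                self.send(p, ("ACK_ALL", cmd_id))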

Statistics: Posted by Sheeo — 27 Oct 2014, 02:45


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=84371#p84371)
Ze_PilOt wrote:
I won't explain how FA networking works. It's peer to peer, nothing fancy.

But I will explain how Blizzard does it.
It's peer to peer with a centralized server.

Like FA, the sim is computed on each client. But instead of sending your commands to all other players, you send your commands a single time to the server, and that server dispatches them to all other clients.

Meaning that for a 4v4 game, in FA you need to send your commands to 7 players; in starcraft2 you only need to send them to one.
In both cases, you receive 7 commands.

Blizzard's way of doing things solves the upload problem (download is rarely the limitation here).


When you send a command packet currently, player 1 sends it to player 2, ..., player 8. 7 packets total.
If you do it with centralized p2p, player 1 sends a packet to the server, and the server sends it to the 7 players. 8 packets total.
When player 1's upload bandwidth is limited, this is beneficial. But in the general case I don't see how it would be better.
You are only sending the packet on a suboptimal route.

Statistics: Posted by rootbeer23 — 27 Oct 2014, 02:28


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=73633#p73633)
Ping to wherever the current faf domain is comes out at 149 ms. If it doesn't work for a proxy it will work as a mirror; it has a 1 Gb connection to the net...

Data Center: Phoenix, AZ
Processor:   Dual Quad Core E5620
Memory:      32 GB DDR3 1333
Bandwidth:   10,000 GB per month
Port Speed:  1000 Mbps public

As a side note, I will create a VPS on this box for faf; I can't give away the whole server.

Statistics: Posted by Cuddles — 23 May 2014, 16:31


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=73550#p73550)
Statistics: Posted by Cuddles — 22 May 2014, 09:17


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=71338#p71338)
Sheeo wrote:
Before handling that, I don't think the current MP_CNT and MP_ANS interpretations work. I get protocol as being 2, and timeout as being the rather large value of 871.391.910; does this make sense?


No, that doesn't look right... In retrospect, I may have got the basic data types wrong (I haven't done much network programming, but shouldn't sequence be wider than a short?), but I didn't get any sensible data otherwise, so I went with it... (is the endianness right? it's set to native atm, but network packets should be in big endian...)
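
Both hypotheses are quick to test side by side with struct (a sketch; the header layouts here are pure guesswork):

Code:
import struct

# Guesswork layouts for the start of the header: 16-bit fields in
# native byte order vs 32-bit fields in big-endian (network) order.
native_shorts = struct.Struct("=HH")    # e.g. (seq, ack) as shorts
big_endian_ints = struct.Struct(">II")  # e.g. (seq, ack) as 32-bit ints

def try_both(raw):
    print("native shorts: ", native_shorts.unpack_from(raw))
    print("big-endian ints:", big_endian_ints.unpack_from(raw))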

Sheeo wrote:
Also I don't see any meaningful strings in MP_CNT or MP_ANS packets. Do you have any idea about compression, and how it's used if turned on? (Compression byte in test data seems to be 1)


It's simple deflate compression. If you hook up the test uimain.lua, that should disable it altogether.
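
If it really is plain deflate, something like this should recover the payload (a sketch; whether the stream carries a zlib header or is raw deflate is an assumption to test both ways):

Code:
import zlib

# Try to inflate a compressed payload: first as zlib-wrapped deflate,
# then as a raw deflate stream with no header (wbits=-15).
def inflate(data):
    try:
        return zlib.decompress(data)
    except zlib.error:
        return zlib.decompress(data, -15)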

Sheeo wrote:
That was a side effect of having quotechar='d' in there, as the datareader would treat the "d" in "data" as a quote delimiter.


I expected it to be something like that...

Anyway, as you see, I got most of the types so far with guesswork.
I also grepped the exe for network logs and constants. There's probably some other stuff in there I missed.

Statistics: Posted by neruz — 17 Apr 2014, 14:44


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=71290#p71290)
neruz wrote:
A little bit of an update:

I didn't have much time to work on the packets, but I just used tshark to convert to csv from the pcap format

Code:
#tshark -n -r test.pcap -T fields -Eheader=y -Eseparator=, -e frame.number -e frame.len -e data > test.csv

    import binascii
    import csv

    with open('test.csv', 'rU') as csvfile:
        datareader = csv.DictReader(csvfile, delimiter=',', quotechar='d')
        for row in datareader:
            data = binascii.unhexlify(row["packet"])
            print _PacketType(data)  # _PacketType comes from the rest of the script


I'll have to rewrite the packet handling code to handle subtypes in the data packet
(the DataReceived method in the lobby could help with decoding the structure)


Before handling that, I don't think the current MP_CNT and MP_ANS interpretations work. I get protocol as being 2, and timeout as being the rather large value of 871.391.910; does this make sense?

Also I don't see any meaningful strings in MP_CNT or MP_ANS packets. Do you have any idea about compression, and how it's used if turned on? (Compression byte in test data seems to be 1)

neruz wrote:
* You'll have to rename the last "data" field in the generated file, python doesn't seem to like it...


That was a side effect of having quotechar='d' in there, as the datareader would treat the "d" in "data" as a quote delimiter.

Using


datareader = csv.DictReader(csvfile, delimiter=',', quoting=csv.QUOTE_NONE)


is more appropriate, since no quotes will be in the file :)
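
Putting the fixes together, the reader loop would look roughly like this (with QUOTE_NONE the "data" column no longer collides, so the rename neruz mentioned shouldn't be needed):

Code:
import binascii
import csv

with open('test.csv', 'rU') as csvfile:
    datareader = csv.DictReader(csvfile, delimiter=',',
                                quoting=csv.QUOTE_NONE)
    for row in datareader:
        data = binascii.unhexlify(row["data"])
        print(binascii.hexlify(data[:4]))  # leading bytes; feed the full
                                           # payload into _PacketType from
                                           # neruz's script to classify it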

Statistics: Posted by Sheeo — 16 Apr 2014, 09:01


Re: FA Networking - Centralized p2p. (/viewtopic.php?t=5429&p=71274#p71274)
Ze_PilOt wrote:
I did the edit.


Thank you sir.

Anaryl wrote:
No, the assumption is correct. An ounce of prevention is worth a pound of cure - instead of retransmitting packets through several layers of complexity - FAF, then a proxy, then a multicast - it's far simpler to educate users on how to correctly configure their hardware. The proxy does function very well as a failsafe.


It wouldn't be several layers, at most one more than currently.

Anaryl wrote:
In any case, we're not talking about proxying for the sake of improving connectivity, but for the sake of improving overall throughput by eliminating the n^2 double-ack messages currently required by the protocol.


Didn't see this ... over the web ... I don't think you would get the same benefit as you would in a campus-level network, where this kind of protocol would see the most action and where bandwidth wasn't so much the issue.


You have this the wrong way around -- it matters over the web and not on a low-latency network, exactly because latency is what makes all those extra packets flying around so bad.

Anaryl wrote:
As far as I can tell from the protocol standards, it works like a smart broadcast - well, a smart broadcast on single networks.


Care to elaborate? What protocol standards are you talking about?

I must stress that we're talking about application-level protocols here; whatever transport mechanism is used underneath is in principle not important. Obviously, since what's used right now is UDP for the lobbyComm and TCP for the GPGNet connection, we're not just going to go ahead and change transports, given the assumptions the application makes when using them.

Theoretically we could unpack the entire protocol and use a different transport, but for now we're just talking about modifying packets to reduce the amount sent.

Anaryl wrote:
Crossing the web is far more complex - you'd be required to have every hop support multicasting, for one; you'd need a 'virtual server' in a sense - to act as the caster, essentially. In my experience, communicating with clients inside a VPN hurts performance for FA - pretty much anything that reliably retransmits your packets is going to hurt - and judging from the horrible effect the Great Firewall has on FA performance, any kind of packet filtering seems to hurt. If you implement it as an extra server it will hurt - and it's really just doing what the game does already...


Again, we're not talking about transport-level multicast (for several reasons), and it simply isn't supported on the web in the general case.


Ze_PilOt:

For now I've just been reading and testing the code Neruz posted along with his testdata--but I see a lot of code in the lobby related to this already.

I think you're right in that we need not go further than the header, unless somehow the data within MP_DATA packets is tailored to the receiver -- I do believe that would be weird behavior.

Statistics: Posted by Sheeo — 15 Apr 2014, 23:25

