This repository has been archived by the owner on Apr 17, 2022. It is now read-only.

O(1) path-finding check #546

Closed
wzdev-ci opened this issue May 26, 2009 · 20 comments

Comments

@wzdev-ci
Contributor

keyword_path-finding_AI resolution_fixed type_patch (an actual patch, not a request for one) | by Per


The first patch adds an O(1) path-finding check function that tells us whether it is possible for a droid with a given propulsion type to go from position A to position B. It generates some static information on map load using a flood fill algorithm which takes some time (a bit under a second on my computer), so we may want to cache it on the disk, and later create it with a map editor.

The second patch uses this path check function to make sure we do not try to return to repair to a repair centre that we cannot reach. This is a random example picked to test the feature; there are many more issues like it.
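The idea described above can be sketched in C. This is an illustrative sketch, not the actual patch: `mapFloodFillContinents` is the function name mentioned later in this thread, while `passable`, `continent`, `flood`, and `canReach` are assumed names, and a real map would carry one continent grid per propulsion type.

```c
#include <stdbool.h>
#include <string.h>

#define MAP_W 8
#define MAP_H 8

/* 1 = passable for the given propulsion type, 0 = blocked.
 * A wall at x == 4 splits this toy map into two continents. */
static const int passable[MAP_H][MAP_W] = {
    {1,1,1,1,0,1,1,1},
    {1,1,1,1,0,1,1,1},
    {1,1,1,1,0,1,1,1},
    {1,1,1,1,0,1,1,1},
    {1,1,1,1,0,1,1,1},
    {1,1,1,1,0,1,1,1},
    {1,1,1,1,0,1,1,1},
    {1,1,1,1,0,1,1,1},
};

/* 0 = unlabelled; positive values are continent ids. */
static int continent[MAP_H][MAP_W];

/* Iterative flood fill: label every passable tile reachable from (sx, sy).
 * An explicit stack avoids deep recursion on large maps. */
static void flood(int sx, int sy, int id)
{
    int stack[MAP_W * MAP_H * 4 + 1][2];
    int top = 0;
    stack[top][0] = sx;
    stack[top][1] = sy;
    top++;
    while (top > 0) {
        top--;
        int x = stack[top][0], y = stack[top][1];
        if (x < 0 || y < 0 || x >= MAP_W || y >= MAP_H)
            continue;
        if (!passable[y][x] || continent[y][x] != 0)
            continue;
        continent[y][x] = id;
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int i = 0; i < 4; i++) {
            stack[top][0] = x + dx[i];
            stack[top][1] = y + dy[i];
            top++;
        }
    }
}

/* Run once on map load; cost is linear in the map size. */
void mapFloodFillContinents(void)
{
    memset(continent, 0, sizeof continent);
    int next = 1;
    for (int y = 0; y < MAP_H; y++)
        for (int x = 0; x < MAP_W; x++)
            if (passable[y][x] && continent[y][x] == 0)
                flood(x, y, next++);
}

/* The O(1) check: a path exists iff both tiles carry the same label. */
bool canReach(int ax, int ay, int bx, int by)
{
    return continent[ay][ax] != 0 && continent[ay][ax] == continent[by][bx];
}
```

Under this sketch, the return-to-repair fix reduces to calling `canReach` for the droid and each candidate repair centre and skipping the unreachable ones.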


Issue migrated from trac:546 at 2022-04-15 18:20:58 -0700


Per uploaded file floodfill2.diff (7.5 KiB)

Flood fill algorithm check


Per uploaded file rtrcheck.diff (3.5 KiB)

Check that we can go to a repair centre


Zarel commented


Looks good to me. After we commit this and the wall patch (and backport both to 2.2, hopefully), we should profile Warzone again and see how performance looks.


Buginator commented


Per, for the floodfill patch, correct me if I am wrong, but that needs to be done on map changes, correct?

If so, then in the campaign we need to add the mapFloodFillContinents() call when we get those lovely limbo & expand missions... or else the map and the flooded continents (ha!) would not be in sync.

P.S. The babas don't appreciate you flooding their lands! They've got enough to worry about! ;)


Per commented


I think limbo will work fine, since the limbo maps are loaded from disk, but good point about the expand missions. It would probably work out fine in practice, though: the flood algorithm ignores scroll limits, so it will only cause trouble if the scroll limit cuts up continents, and even then the result will be no worse than it already is. But I'll see if I can think up a solution.


Kreuvf commented


> so we may want to cache it on the disk, and later create it with a map editor.

I object. Not doing this is easier and faster, allows for faster map distribution, and wastes no disk space.

  1. Doing this would mean trusting untrustworthy data (the precalculated information in the map file/disk cache). So we would have to check that data for validity. This adds complexity: If there is no algorithm for checking the validity which is faster than doing the actual calculation, checking for validity will take more time than calculating only.
  2. Even the data saved to disk cannot be trusted and therefore must be checked.
  3. Saving additional data in map-files will lead to increased map sizes. Larger map files in turn will need more time when distributing, especially when distributing via WZ's built-in capabilities.
  4. Parsing map files will become more complex when "flooding information" is included. This makes the corresponding code more error-prone.

Therefore I think that not saving any such data in a map file/disk cache and instead calculating it when needed is a superior way of doing it.


wzdev-ci commented May 28, 2009

Zarel commented


Replying to Warzone2100/old-trac-import#546 (comment:4):

> 1. Doing this would mean trusting untrustworthy data (the precalculated information in the map file/disk cache). So we would have to check that data for validity. This adds complexity: If there is no algorithm for checking the validity which is faster than doing the actual calculation, checking for validity will take more time than calculating only.

...um, why is that data untrustworthy? We trust all the other data in the map file; why not the precalculated data? Especially since the precalculated data saves millions of microseconds.

> 2. Even the data saved to disk cannot be trusted and therefore must be checked.

...why not? At most, we do a checksum or something, which is much faster than several million microseconds.

> 3. Saving additional data in map-files will lead to increased map sizes. Larger map files in turn will need more time when distributing, especially when distributing via WZ's built-in capabilities.

It's negligible. Plus, I'm sure the data computes much slower than it transfers.

> 4. Parsing map files will become more complex when "flooding information" is included. This makes the corresponding code more error-prone.

...oh, come on, that's really negligible. We do a lot more error-prone changes for a lot less speed benefit all the time.


Per changed status from new to assigned


Per changed owner from `` to Per


Per commented


Some ideas: The continent data can be saved in the map directory, so no consistency check needed. The size will be four bytes for each map node, i.e. a maximum of 256 KiB uncompressed. I think I would use two PNG images to store this info, say normalcontinent.png and hovercontinent.png, so it would be both compressed and easily viewed. We could precalculate this data for all default maps.
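With four bytes per node, each continent id maps naturally onto one RGBA pixel. A minimal sketch of just the packing (`idToPixel`/`pixelToId` are hypothetical names; the actual PNG read/write would go through a library such as libpng):

```c
#include <stdint.h>

/* Pack a 32-bit continent id into an RGBA pixel (big-endian channel order,
 * an arbitrary but fixed convention for this sketch). */
static void idToPixel(uint32_t id, uint8_t px[4])
{
    px[0] = (uint8_t)(id >> 24); /* R */
    px[1] = (uint8_t)(id >> 16); /* G */
    px[2] = (uint8_t)(id >> 8);  /* B */
    px[3] = (uint8_t)id;         /* A */
}

/* Recover the continent id from an RGBA pixel on map load. */
static uint32_t pixelToId(const uint8_t px[4])
{
    return ((uint32_t)px[0] << 24) | ((uint32_t)px[1] << 16)
         | ((uint32_t)px[2] << 8)  |  (uint32_t)px[3];
}
```

Since continent counts are small, most of the image is near-constant, which is exactly what PNG's filtering and DEFLATE compress well.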


wzdev-ci commented May 28, 2009

Kreuvf commented


Replying to Warzone2100/old-trac-import#546 (comment:5):

> Replying to Warzone2100/old-trac-import#546 (comment:4):
>
> > 1. Doing this would mean trusting untrustworthy data (the precalculated information in the map file/disk cache). So we would have to check that data for validity. This adds complexity: If there is no algorithm for checking the validity which is faster than doing the actual calculation, checking for validity will take more time than calculating only.
>
> ...um, why is that data untrustworthy? We trust all the other data in the map file; why not the precalculated data? Especially since the precalculated data saves millions of microseconds.

Trusting the other data is another topic, and AFAIK there is at least some validity checking somewhere (why else would I get an error message for duplicate stuff?). Any data that comes from the outside is untrustworthy per se. How can we be sure that the map and all the data have been created by good users? And even if it was created by good users, who guarantees that the data has not been changed by something else? And please stop counting microseconds; that doesn't change anything. Whether it's a microsecond wasted or an hour wasted, sooner or later it'll add up.

By the way: There are more trivial reasons for not saving stuff to disk/map files and therefore not trusting what's in those files: We changed internal formats often enough in the past (save-games, anyone?) and as far as I understand it save-game format is a permanent headache. Something very similar could happen to map-files, if you misuse them to store unnecessary (=untrustworthy) data in them.

> > 2. Even the data saved to disk cannot be trusted and therefore must be checked.
>
> ...why not? At most, we do a checksum or something, which is much faster than several million microseconds.

See above for the reason. Actually, you'd have to give reasons for trustworthiness, as everything is untrustworthy by default. And if we did a checksum, we would just compare checksums? Checksums of what? And why trust the checksum in that file ("outside data")?

> > 3. Saving additional data in map-files will lead to increased map sizes. Larger map files in turn will need more time when distributing, especially when distributing via WZ's built-in capabilities.
>
> It's negligible. Plus, I'm sure the data computes much slower than it transfers.

It's not negligible. (Back your claims or I'll just claim the opposite.)

> > 4. Parsing map files will become more complex when "flooding information" is included. This makes the corresponding code more error-prone.
>
> ...oh, come on, that's really negligible. We do a lot more error-prone changes for a lot less speed benefit all the time.

So at least you agree that it makes the corresponding code more error-prone. And the speed benefit? There is enough benefit in having the "flooding information" in the first place. If I understood everything correctly, it'll speed up path-finding.

My conclusion still stands: not only is there no need to store that information permanently, doing so will give rise to further problems.


wzdev-ci commented May 28, 2009

Kreuvf commented


Replying to Warzone2100/old-trac-import#546 (comment:6):

> Some ideas: The continent data can be saved in the map directory, so no consistency check needed.

Why is no consistency check needed? Who/what guarantees that a map with the same name as before hasn't changed? Or the other way around: who/what guarantees that the images have not been tampered with? Checksums of the map files could be used to verify that maps haven't changed, but then again you'd need to save the checksum somewhere. Somewhere nobody could have tampered with it.

Replying to Warzone2100/old-trac-import#546 (comment:6):

> The size will be four bytes for each map node, i.e. a maximum of 256 KiB uncompressed.

On slow connections (8 kb/s upload) this will add 32 s to the total map distribution time. And with compression you'd have to do additional calculations for decompressing.

Replying to Warzone2100/old-trac-import#546 (comment:6):

> I think I would use two PNG images to store this info, say normalcontinent.png and hovercontinent.png, so it would be both compressed and easily viewed. We could precalculate this data for all default maps.

How could the validity of the precalculated data be verified without rerunning the calculation on one's own machine? A checksum would probably be created from the precalculated data, but for comparison you'd need a checksum of your own, which would require running the calculations yourself.

Maybe I am missing some important point(s) here, so I'd be glad if somebody stepped up to explain to me how this is supposed to work in a reliable manner.


Per commented


I do not understand the concern for verification. Nobody is going to change maps stored on the disk. Even in savegames, the map files themselves do not change. A map editor that understands our map format would have to know about all the files in a map directory, of course, but such an editor does not yet exist.


Per set resolution to fixed


Per changed status from assigned to closed


Per commented


(In [7578]) Add an O(1) path-finding check function that tells us whether it is possible
for a droid with a given propulsion type to go from position A to position B.
It generates some static information on map load using a flood fill algorithm
which takes some time.

Use this function to make sure we do not try to return to repair to a repair
centre that we cannot reach. Closes #546


wzdev-ci commented May 28, 2009

Zarel commented


Replying to Warzone2100/old-trac-import#546 (comment:7):

> Trusting the other data is another topic and AFAIK there is at least some validity checking somewhere (why else would I get an error message for duplicate stuff?). Every data that comes from the outside is per se untrustworthy. How can we be sure that the map and all the data have been created by good users? And even if it was created by good users who guarantees that the data has not been changed by something different? And please stop counting microseconds, this does not change anything. If it's a microsecond wasted or an hour wasted, sooner or later it'll sum up.

Sure, we can't be sure that they've been created by good users, but why do we need to be?

> By the way: There are more trivial reasons for not saving stuff to disk/map files and therefore not trusting what's in those files: We changed internal formats often enough in the past (save-games, anyone?) and as far as I understand it save-game format is a permanent headache. Something very similar could happen to map-files, if you misuse them to store unnecessary (=untrustworthy) data in them.

Oh, not this. It's pretty easy to tell if a format is compatible. That can be as simple as adding a version number counter into the save game format. I mean, our map format isn't convoluted like our savegame format is; it'd be moderately simple just to add another file, and check for its presence.

> See above for the reason. Actually, you'd have to give reasons for trustworthiness, as everything is untrustworthy by default. And if we did a checksum we would just compare checksums? Checksums of what? And why trust the checksum in that file ("outside data")?

Why do we need to trust our map data? You still haven't established this. I proposed a checksum as a safeguard against accidental corruption. If someone wants to intentionally fiddle with our map files, let them. We have an open-source philosophy - if someone wants to mess with stuff, great. I don't see the problem here.
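The safeguard against accidental corruption that Zarel alludes to could be as simple as a CRC-32 over the cached file. This is a sketch of the general technique (polynomial 0xEDB88320, the common reflected CRC-32), not code from any patch in this ticket; `crc32_simple` is a hypothetical name:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32: cheap enough to run on a 256 KiB cache file at load
 * time, and catches accidental corruption. It is NOT a defence against
 * a deliberate attacker, which is exactly the point being argued here. */
uint32_t crc32_simple(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}
```

On load, the game would recompute the CRC of the continent data and fall back to regenerating it with the flood fill if the stored value does not match.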

> So at least you agree that it makes the corresponding code more error-prone. And the speed benefit? There is enough benefit in having the "flooding information" in the first place. If I understood everything correctly, it'll speed up path-finding.

Yes, well, +2 seconds' worth of map loading is pretty nontrivial. It should be avoided whenever possible.

I meant that it's error-prone in the tautological sense: All code is "error-prone". Even bugfixes. And yet we keep on writing it.

> My conclusion still is: Not only there is no need for storing that information permanently, doing so will give rise to further problems.

What further problems? You've said nothing about any further problems except some vague commentary on trustworthiness.

Replying to Warzone2100/old-trac-import#546 (comment:8):

> On slow connections (8 kb/s upload) this will add 32 s to the total map distribution time. And with compression you'd have to do additional calculations for decompressing.

Decompressing a PNG takes a trivial amount of time (far less than two seconds).

> Maybe I am missing some important point(s) here, so I'd be glad if somebody stepped up to explain to me how this is supposed to work in a reliable manner.

I think what you're not understanding is this: We don't need to be secure against attackers. You must realize, we are developing a game here, not, say, OpenSSL. What kind of attack are you afraid of?

Here's a chart for you:

| Attacker | Data regenerated | Data cached in map |
| --- | --- | --- |
| Malicious mapper/MitM | Warzone crashes | Warzone crashes |
| Subtly malicious mapper/MitM | Warzone behaves strangely | Warzone behaves strangely |
| Trustable mapper | Warzone works fine | Warzone works fine |

Malicious mappers can already crash the game, or if they want to be more subtle, screw with gateways, features on top of buildings, and so on. And as a community, we are small enough that we don't have people bored enough to do that. And even if we did, we'd get around it by not downloading their maps. That's how every game works.

Emphasis on "malicious" - a simply incompetent map maker would be fine, since it's the underlying map editor that takes care of the continent data cache.


wzdev-ci commented May 29, 2009

Kreuvf commented


Replying to Warzone2100/old-trac-import#546 (comment:11):

> I think what you're not understanding is this: We don't need to be secure against attackers. You must realize, we are developing a game here, not, say, OpenSSL.

Okay, thanks. I don't see any problems then. Thank you for the help.


wzdev-ci commented Jan 3, 2012

cybersphinx changed milestone from 3.0 to unspecified


wzdev-ci commented Jan 3, 2012

cybersphinx commented


Milestone 3.0 deleted
