
Enable or disable stacking based on TPS hysteresis #99

Open
wants to merge 2 commits into master
Conversation

quiquelhappy

I tested this using Stress, with /stress test=entity amount=5000 duration=120 range=10

This is a pretty basic PR; I didn't want to add too many options so as not to over-complicate it. Right now the TPS check runs every 30 seconds using runTaskAsynchronously from the StackManager class. It might be nice to add an option so the end user can also configure the check frequency, but for now I think it gets the job done.

The feature is disabled by default. When enabled, stacking is suspended once the server drops to 16 TPS and resumed once it recovers to 18 TPS.

I added the disabled status using isEntityStackingTemporarilyDisabled, so I didn't have to modify every single class.
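
For illustration, a minimal sketch of the hysteresis described above (the field, method, and threshold values here are assumptions for the example, not the actual PR code; only isEntityStackingTemporarilyDisabled comes from the PR):

// Sketch of the TPS hysteresis: suspend stacking at 16 TPS, resume at 18 TPS.
private volatile boolean entityStackingTemporarilyDisabled = false;

private void runTpsCheck(double currentTps) {
    if (!this.entityStackingTemporarilyDisabled && currentTps <= 16.0) {
        // TPS dropped to the lower threshold: suspend entity stacking
        this.entityStackingTemporarilyDisabled = true;
    } else if (this.entityStackingTemporarilyDisabled && currentTps >= 18.0) {
        // TPS recovered past the upper threshold: resume entity stacking
        this.entityStackingTemporarilyDisabled = false;
    }
    // Nothing changes between 16 and 18 TPS, which prevents rapid toggling
}

public boolean isEntityStackingTemporarilyDisabled() {
    return this.entityStackingTemporarilyDisabled;
}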

@quiquelhappy
Author

Any chance this could be reviewed?

Member

@Esophose Esophose left a comment


The logic here is mostly sound, but there is the problem that Server#getTPS is a method only available on Paper servers. NMSHandler functionality will need to be added to support Spigot for this feature.
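
For illustration, a hedged sketch of what that addition might look like (the method name and javadoc are assumptions, not the plugin's actual NMSHandler API):

// Hypothetical addition to the NMSHandler interface so each version-specific
// handler can expose recent TPS without depending on the Paper-only Server#getTPS.
public interface NMSHandler {

    // ... existing NMSHandler methods ...

    /**
     * Gets the server's recent TPS samples, typically the 1m/5m/15m averages.
     */
    double[] getRecentTps();

}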


this.stackingBasedOnPerformance = ConfigurationManager.Setting.PERFORMANCE_TPS_TOGGLE.getBoolean();
if(this.stackingBasedOnPerformance){
    Bukkit.getServer().getScheduler().runTaskAsynchronously(rosePlugin, new Runnable() {
Member

@Esophose Esophose Mar 22, 2024


Why is creating this done in an async task?

    @Override
    public void run() {
        if(ConfigurationManager.Setting.PERFORMANCE_TPS_TOGGLE.getBoolean()){
            Bukkit.getScheduler().runTaskTimerAsynchronously(rosePlugin, () -> {
Member


You need to assign async repeating tasks to a variable and ensure they are cancelled when the plugin shuts down. See how the autosave task is handled in this class.

Additionally, as it stands, this task never gets cancelled and is started again every time /rs reload runs, so you can end up with multiple copies of the task running at once, which is bad.
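
For illustration, a hedged sketch of the pattern being described, assuming a manager with reload()/disable() lifecycle methods (field and method names are illustrative, not the actual StackManager or autosave code):

// Keep a handle to the repeating task so it can be cancelled later.
private BukkitTask tpsCheckTask;

@Override
public void reload() {
    if (ConfigurationManager.Setting.PERFORMANCE_TPS_TOGGLE.getBoolean()) {
        // 30 seconds = 600 ticks between checks; runTpsCheck is a hypothetical
        // no-arg method that reads the TPS and applies the hysteresis
        this.tpsCheckTask = Bukkit.getScheduler().runTaskTimerAsynchronously(
                this.rosePlugin, this::runTpsCheck, 0L, 600L);
    }
}

@Override
public void disable() {
    // Cancel on shutdown/reload so /rs reload never leaves duplicate tasks running
    if (this.tpsCheckTask != null) {
        this.tpsCheckTask.cancel();
        this.tpsCheckTask = null;
    }
}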

Bukkit.getServer().getScheduler().runTaskAsynchronously(rosePlugin, new Runnable() {
    @Override
    public void run() {
        if(ConfigurationManager.Setting.PERFORMANCE_TPS_TOGGLE.getBoolean()){
Member


Not sure why this is being checked again? You already checked it above and assigned it to a variable, but you aren't using that variable here.

    public void run() {
        if(ConfigurationManager.Setting.PERFORMANCE_TPS_TOGGLE.getBoolean()){
            Bukkit.getScheduler().runTaskTimerAsynchronously(rosePlugin, () -> {
                double[] tps = Bukkit.getServer().getTPS();
Member


Server#getTPS() is a Paper-API method and will throw an error on Spigot servers. I believe it's only possible to check the server TPS using NMS on Spigot servers, so that functionality will need to be added to the NMSHandler interface and implemented for each handler.
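
For illustration, one common workaround on Spigot is reading the NMS MinecraftServer's public recentTps field via reflection; this is a hedged sketch, not necessarily how the handlers would be implemented, and class/field locations can vary by version:

// Hedged sketch: read the Spigot-added recentTps field via reflection.
public static double[] getRecentTps(org.bukkit.Server server) {
    try {
        // CraftServer#getServer() returns the NMS MinecraftServer instance
        Object minecraftServer = server.getClass().getMethod("getServer").invoke(server);
        // Spigot adds a public double[] recentTps field holding the 1m/5m/15m averages
        return (double[]) minecraftServer.getClass().getField("recentTps").get(minecraftServer);
    } catch (ReflectiveOperationException e) {
        // If the field can't be located on this version, assume a healthy 20 TPS
        return new double[] { 20.0, 20.0, 20.0 };
    }
}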

@quiquelhappy
Author

I currently don't have the time to make the changes mentioned, but if someone can look into this, it would make for a very good feature <3
