The timeout interval from the client SftpClient cannot be configured on the current server. #371
It's not entirely clear what this is about.
Hello, friend. I wrote all kinds of test cases; here's just one example:

```java
private static SftpClient sftpClient;
private static SftpFileSystem sftpFileSystem;

public static void main(String[] args) throws Exception {
    Runnable runnable = new UploadFile();
    Thread thread1 = new Thread(runnable);
    thread1.start();
    thread1.join();
    Thread thread2 = new Thread(runnable);
    thread2.start();
    thread2.join();
    Thread thread3 = new Thread(runnable);
    thread3.start();
    thread3.join();
}

private static synchronized SftpClient getSftpClient() throws IOException {
    if (sftpClient != null) {
        return sftpClient;
    }
    sftpClient = getSftpFileSystem().getClient();
    return sftpClient;
}

private static synchronized SftpFileSystem getSftpFileSystem() throws IOException {
    if (sftpFileSystem != null) {
        return sftpFileSystem;
    }
    log.info("first init client");
    SshClient client = SshClient.setUpDefaultClient();
    client.start();
    ClientSession session = client.connect("admin", "127.0.0.1", 9120).verify().getSession();
    session.addPasswordIdentity("admin");
    session.auth().verify();
    session.setSessionHeartbeat(SessionHeartbeatController.HeartbeatType.IGNORE, Duration.ofMillis(5000));
    sftpFileSystem = DefaultSftpClientFactory.INSTANCE.createSftpFileSystem(session);
    return sftpFileSystem;
}

private static class UploadFile implements Runnable {
    @Override
    public void run() {
        for (int i = 0; i < 3; i++) {
            try (FileInputStream in = new FileInputStream(
                    "D:\\20230113\\202301131049" + i + ".dump")) {
                SftpPath sftpPath = getSftpFileSystem().getPath(
                        "202301131049.dump" + Thread.currentThread().getId() + i + ".tmp");
                Files.copy(in, sftpPath);
                getSftpClient().rename("202301131049.dump" + Thread.currentThread().getId() + i + ".tmp",
                        "202301131049.dump" + Thread.currentThread().getId() + i + ".dump");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
```

As in the example above, I just want to use the same SftpClient object in different threads. However, when I observe the server through jstack, each time a thread is started the server creates a new SftpSubsystem thread that stays in the waiting state, even though my first thread's upload task has already completed.

First time:
```
$ jstack.exe -l 56192 | grep sshd-Sftp
"sshd-SftpSubsystem-40668-thread-1" #41 daemon prio=5 os_prio=0 tid=0x00000000290e0000 nid=0xc67c runnable [0x0000000000b8e000]
```

Second time:

```
$ jstack.exe -l 56192 | grep sshd-Sftp
"sshd-SftpSubsystem-60480-thread-1" #42 daemon prio=5 os_prio=0 tid=0x00000000290e1800 nid=0x94c4 waiting on condition [0x000000002be5f000]
"sshd-SftpSubsystem-40668-thread-1" #41 daemon prio=5 os_prio=0 tid=0x00000000290e0000 nid=0xc67c waiting on condition [0x0000000000b8f000]
```

Third time:

```
$ jstack.exe -l 56192 | grep sshd-Sftp
"sshd-SftpSubsystem-57236-thread-1" #43 daemon prio=5 os_prio=0 tid=0x00000000290e4000 nid=0xa48c waiting on condition [0x000000002bf5f000]
"sshd-SftpSubsystem-60480-thread-1" #42 daemon prio=5 os_prio=0 tid=0x00000000290e1800 nid=0x94c4 waiting on condition [0x000000002be5f000]
"sshd-SftpSubsystem-40668-thread-1" #41 daemon prio=5 os_prio=0 tid=0x00000000290e0000 nid=0xc67c waiting on condition [0x0000000000b8f000]
```

The detailed jstack of thread sshd-SftpSubsystem-57236-thread-1 looks like this:

```
"sshd-SftpSubsystem-57236-thread-1" #31 daemon prio=5 os_prio=0 tid=0x000000002a046000 nid=0xe130 waiting on condition [0x000000000096f000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x00000005c2b66680> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2045)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.sshd.sftp.server.SftpSubsystem.run(SftpSubsystem.java:297)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
        at java.util.concurrent.FutureTask.run(FutureTask.java)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
        - <0x00000005c2b66898> (a java.util.concurrent.ThreadPoolExecutor$Worker)
```

I tried to look at the source code of SftpSubsystem, and I found that it keeps fetching values from a blocking queue:

```java
try {
    ChannelSession channel = getServerChannelSession();
    LocalWindow localWindow = channel.getLocalWindow();
    while (true) {
        Buffer buffer = requests.take();
        if (buffer == CLOSE) {
            break;
        }
        buffersCount++;
        process(buffer);
        localWindow.check();
    }
} catch (Throwable t) {
```

So I don't know how to close these objects, or whether there is a timeout configuration that allows them to be destroyed.
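As a practical workaround while the server-side question is open, the client-side objects can be closed explicitly so the server can tear down the matching SftpSubsystem. This is a minimal sketch, assuming the same Apache MINA SSHD client API as the example above (host, port, and credentials are the example's values); closing the SftpFileSystem closes its SFTP channels, and closing the session ends the SSH connection:

```java
// Sketch: release client-side resources so the server can release the
// corresponding SftpSubsystem thread. Uses the Apache MINA SSHD client API
// as in the example above; values ("admin", 9120) are from that example.
SshClient client = SshClient.setUpDefaultClient();
client.start();
try (ClientSession session = client.connect("admin", "127.0.0.1", 9120)
        .verify().getSession()) {
    session.addPasswordIdentity("admin");
    session.auth().verify();
    try (SftpFileSystem fs = DefaultSftpClientFactory.INSTANCE.createSftpFileSystem(session)) {
        // ... perform uploads and renames here ...
    } // closing the file system closes the SFTP channels it opened
} finally {
    client.stop(); // stop the client and release its I/O resources
}
```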
Maybe it would help if you closed the various client-side objects you use. But yes, there is a problem here. A per-channel timeout on the server side could mitigate this (and other cases where clients don't properly close their channels).
The previous implementation always put unused SftpClients back into the pool, but SftpClients in the pool were never closed (unless the whole SSH session was closed).

Let this pool work more like a Java thread pool: besides a maximum size, give it a minimum "core" size and a maximum life time for idle channels in the pool, and remove channels from the pool and close them when they expire. By default, the maximum pool size is 8, the core size 1, and the idle life time 10 seconds. Also drain the pool when the file system closes, and close all channels.

Remove the ThreadLocal. This mechanism was questionable anyway; it was the source of multiple earlier bug reports, and there are some scenarios that it just cannot handle correctly.

This change will mean that an application using more threads on an SftpFileSystem instance than the pool size may see poor SFTP performance on the extra threads. In this case, the pool size should be increased, or the application redesigned. Add some technical documentation on all this.

Ensure in SftpFileSystem.readDir(String) that the SftpClient on the SftpIterableDirEntry is the wrapper. We don't want to close the underlying pooled channel. Ditto for the InputStream and OutputStream returned from read or write: those also must use the wrapper to react properly when the wrapper is closed.

Ensure the behavior of an SftpDirectoryStream is correct when the stream is closed. According to the javadoc on DirectoryStream, the iterator may continue to produce already cached entries, but then may exhaust early.

Bug: apache#371
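The pooling scheme described in the commit message (a bounded pool with a "core" size and an idle-expiry time) can be sketched as a plain-Java data structure. The class and method names here (ExpiringPool, acquire, release, expire) are hypothetical illustrations, not the actual SSHD classes:

```java
import java.time.Duration;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch of the described mechanism: entries beyond the core
// size are removed and handed back to the caller for closing once they
// have been idle longer than the configured life time.
class ExpiringPool<T> {
    private static final class Entry<T> {
        final T value;
        final long idleSince;
        Entry(T value, long idleSince) { this.value = value; this.idleSince = idleSince; }
    }

    private final int maxSize;
    private final int coreSize;
    private final long idleLifeMillis;
    private final Deque<Entry<T>> pool = new ArrayDeque<>(); // newest first

    ExpiringPool(int maxSize, int coreSize, Duration idleLife) {
        this.maxSize = maxSize;
        this.coreSize = coreSize;
        this.idleLifeMillis = idleLife.toMillis();
    }

    // Returns a pooled value, or null if empty (caller opens a new channel).
    synchronized T acquire() {
        Entry<T> e = pool.pollFirst();
        return e == null ? null : e.value;
    }

    // Returns the value to the pool; false means the pool is full and the
    // caller should close the channel instead.
    synchronized boolean release(T value, long now) {
        if (pool.size() >= maxSize) {
            return false;
        }
        pool.addFirst(new Entry<>(value, now));
        return true;
    }

    // Removes entries above the core size that have been idle too long;
    // the caller closes the returned channels.
    synchronized List<T> expire(long now) {
        List<T> expired = new ArrayList<>();
        while (pool.size() > coreSize) {
            Entry<T> oldest = pool.peekLast();
            if (now - oldest.idleSince < idleLifeMillis) {
                break;
            }
            pool.pollLast();
            expired.add(oldest.value);
        }
        return expired;
    }
}
```

With maxSize 8, coreSize 1, and a 10-second idle life, this mirrors the defaults stated above: a ninth concurrent borrower gets no pooled channel, and all but one idle channel are eventually expired and closed.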
Description
I wrote a simple demo
The Server code:
The Client code:
Motivation
To imitate hackers or misbehaving clients, I deliberately do not close the connection.
The following code is used to run the sshd client; I didn't shut it down, and I tried 1,000 times.
When I then traced the stack on the server side, I found that many of the SftpSubsystem objects were not closed.
Alternatives considered
I tried to look for timeout-related configurations in SftpModuleProperties and CoreModuleProperties, but I didn't find any. So I'd like to ask: do we currently support this capability?
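One related knob that does exist in CoreModuleProperties is the session-level idle timeout. This is a hedged sketch, not a confirmed answer to the per-channel question above: whether the session idle timeout actually reaps sessions whose SFTP channels are merely idle is untested here, and the property's exact value type may differ between SSHD versions.

```java
// Sketch: configure a server-side idle timeout so that sessions a client
// never closes are eventually torn down. Assumes the Apache MINA SSHD
// server API; whether this covers idle SftpSubsystem channels is untested.
SshServer sshd = SshServer.setUpDefaultServer();
CoreModuleProperties.IDLE_TIMEOUT.set(sshd, Duration.ofMinutes(5));
sshd.start();
```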
Thank you very much