pub struct Client<C = BoxedConnector<RpcService>> { /* private fields */ }
Available on crate feature rpc only.
Iroh blobs client.
Implementations
impl<C> Client<C> where C: Connector<RpcService>
pub fn new(rpc: RpcClient<RpcService, C>) -> Self
Create a new client.
pub fn tags(&self) -> tags::Client<C>
Get a tags client.
pub async fn status(&self, hash: Hash) -> Result<BlobStatus>
Get the status of a blob on the node.
Note that a blob that is only partially stored on the node is not reported as complete; see BlobStatus for the possible states.
pub async fn has(&self, hash: Hash) -> Result<bool>
Check if a blob is completely stored on the node.
This is just a convenience wrapper around status that returns a boolean.
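A minimal usage sketch (not taken from the crate docs): it assumes a connected blobs Client and the relevant imports are already in scope, and uses anyhow for errors.

// Sketch: prefer `has` for a simple yes/no, `status` for details.
async fn report(blobs: &Client, hash: Hash) -> anyhow::Result<()> {
    if blobs.has(hash).await? {
        println!("{hash} is completely stored");
    } else {
        println!("{hash}: {:?}", blobs.status(hash).await?);
    }
    Ok(())
}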
pub async fn batch(&self) -> Result<Batch<C>>
Create a new batch for adding data.
A batch is a context in which temp tags are created and data is added to the node. Temp tags are automatically deleted when the batch is dropped, leading to the data being garbage collected unless a permanent tag is created for it.
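A sketch of the intended flow; the Batch method names used here (add_bytes, persist) are assumptions based on the description above, so check the Batch type before relying on them.

// Sketch: add data in a batch, then upgrade its temp tag to a permanent
// tag so the data is not garbage collected when the batch is dropped.
async fn add_persistently(blobs: &Client) -> anyhow::Result<()> {
    let batch = blobs.batch().await?;
    let temp_tag = batch.add_bytes("hello world").await?; // assumed Batch API
    let tag = batch.persist(temp_tag).await?;             // assumed Batch API
    println!("pinned under tag {tag:?}");
    Ok(())
}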
pub async fn read(&self, hash: Hash) -> Result<Reader>
Stream the contents of a single blob.
Returns a Reader, which can report the size of the blob before reading it.
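A sketch (assumes a connected Client and Hash in scope; imports elided):

// Sketch: check the size via the Reader before deciding how to consume it.
async fn read_if_small(blobs: &Client, hash: Hash) -> anyhow::Result<()> {
    let mut reader = blobs.read(hash).await?;
    if reader.size() <= 1024 * 1024 {
        let bytes = reader.read_to_bytes().await?;
        println!("read {} bytes", bytes.len());
    }
    Ok(())
}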
pub async fn read_at(&self, hash: Hash, offset: u64, len: ReadAtLen) -> Result<Reader>
Read a range of a single blob, starting at offset with length given by len.
If len is ReadAtLen::All, everything from offset to the end of the blob is read.
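For illustration, a ranged read; ReadAtLen::Exact(n) is assumed to be the variant that requests exactly n bytes.

// Sketch: read 16 KiB starting at byte offset 4096.
async fn read_range(blobs: &Client, hash: Hash) -> anyhow::Result<()> {
    let mut reader = blobs.read_at(hash, 4096, ReadAtLen::Exact(16 * 1024)).await?;
    let chunk = reader.read_to_bytes().await?;
    println!("got {} bytes", chunk.len());
    Ok(())
}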
pub async fn read_to_bytes(&self, hash: Hash) -> Result<Bytes>
Read all bytes of a single blob.
This allocates a buffer for the full blob. Use only if you know that the blob you're reading is small. If not sure, use Self::read and check the size with Reader::size before calling Reader::read_to_bytes.
pub async fn read_at_to_bytes(&self, hash: Hash, offset: u64, len: ReadAtLen) -> Result<Bytes>
Read all bytes of a single blob at offset for length len.
This allocates a buffer for the full length.
pub async fn add_from_path(&self, path: PathBuf, in_place: bool, tag: SetTagOption, wrap: WrapOption) -> Result<AddProgress>
Import a blob from a filesystem path.
path should be an absolute path valid for the file system on which the node runs.
If in_place is true, Iroh will assume that the data will not change and will share it in place without copying to the Iroh data directory.
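A sketch; the file path is hypothetical, the SetTagOption and WrapOption imports are elided, and AddProgress::finish (awaiting the final AddOutcome) is assumed.

use std::path::PathBuf;

// Sketch: import a file by copying it into the store.
async fn import_file(blobs: &Client) -> anyhow::Result<()> {
    let progress = blobs
        .add_from_path(
            PathBuf::from("/data/example.bin"), // hypothetical absolute path on the node
            false,                              // copy rather than reference in place
            SetTagOption::Auto,
            WrapOption::NoWrap,
        )
        .await?;
    let outcome = progress.finish().await?; // assumed helper to await completion
    println!("imported as {}", outcome.hash);
    Ok(())
}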
pub async fn create_collection(&self, collection: Collection, tag: SetTagOption, tags_to_delete: Vec<Tag>) -> Result<(Hash, Tag)>
Create a collection from already existing blobs.
To automatically clear the tags of the blobs that were passed in, set tags_to_delete to those tags; they will be deleted once the collection is created.
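A sketch; building a Collection from (name, hash) pairs via collect is an assumption about the Collection API.

// Sketch: add two blobs, then group them into a named collection.
async fn make_collection(blobs: &Client) -> anyhow::Result<()> {
    let a = blobs.add_bytes("first entry").await?;
    let b = blobs.add_bytes("second entry").await?;
    let collection: Collection = [
        ("a.txt".to_string(), a.hash),
        ("b.txt".to_string(), b.hash),
    ]
    .into_iter()
    .collect(); // assumes Collection: FromIterator<(String, Hash)>
    let (hash, tag) = blobs
        .create_collection(collection, SetTagOption::Auto, Vec::new())
        .await?;
    println!("collection {hash} tagged as {tag:?}");
    Ok(())
}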
pub async fn add_reader(&self, reader: impl AsyncRead + Unpin + Send + 'static, tag: SetTagOption) -> Result<AddProgress>
Write a blob by passing an async reader.
pub async fn add_stream(&self, input: impl Stream<Item = Result<Bytes>> + Send + Unpin + 'static, tag: SetTagOption) -> Result<AddProgress>
Write a blob by passing a stream of bytes.
pub async fn add_bytes(&self, bytes: impl Into<Bytes>) -> Result<AddOutcome>
Write a blob by passing bytes.
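A sketch; the AddOutcome fields used below (hash, size) are assumed.

// Sketch: store a small in-memory value and keep its hash.
async fn store_greeting(blobs: &Client) -> anyhow::Result<()> {
    let outcome = blobs.add_bytes("hello iroh").await?;
    println!("stored {} bytes as {}", outcome.size, outcome.hash);
    Ok(())
}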
pub async fn add_bytes_named(&self, bytes: impl Into<Bytes>, name: impl Into<Tag>) -> Result<AddOutcome>
Write a blob by passing bytes, setting an explicit tag name.
pub async fn validate(&self, repair: bool) -> Result<impl Stream<Item = Result<ValidateProgress>>>
Validate hashes on the running node.
If repair is true, repair the store by removing invalid data.
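A sketch that drains the progress stream; it uses futures::StreamExt and std::pin::pin! since the returned stream is not necessarily Unpin.

use futures::StreamExt;

// Sketch: run validation without repairing and log every progress event.
async fn check_store(blobs: &Client) -> anyhow::Result<()> {
    let mut progress = std::pin::pin!(blobs.validate(false).await?);
    while let Some(event) = progress.next().await {
        println!("{:?}", event?);
    }
    Ok(())
}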
pub async fn consistency_check(&self, repair: bool) -> Result<impl Stream<Item = Result<ConsistencyCheckProgress>>>
Validate hashes on the running node.
If repair is true, repair the store by removing invalid data.
pub async fn download(&self, hash: Hash, node: NodeAddr) -> Result<DownloadProgress>
Download a blob from another node and add it to the local database.
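A sketch; DownloadProgress::finish (awaiting the final outcome) is assumed, and node_addr must carry enough dialing information to reach the peer.

// Sketch: fetch a blob from a known peer and wait until it is complete.
async fn fetch(blobs: &Client, hash: Hash, node_addr: NodeAddr) -> anyhow::Result<()> {
    let outcome = blobs.download(hash, node_addr).await?.finish().await?;
    println!("download finished: {outcome:?}");
    Ok(())
}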
pub async fn download_hash_seq(&self, hash: Hash, node: NodeAddr) -> Result<DownloadProgress>
Download a hash sequence from another node and add it to the local database.
pub async fn download_with_opts(&self, hash: Hash, opts: DownloadOptions) -> Result<DownloadProgress>
Download a blob, with additional options.
pub async fn export(&self, hash: Hash, destination: PathBuf, format: ExportFormat, mode: ExportMode) -> Result<ExportProgress>
Export a blob from the internal blob store to a path on the node's filesystem.
destination should be a writeable, absolute path on the local node's filesystem.
If format is set to ExportFormat::Collection and the hash refers to a collection, all children of the collection will be exported. See ExportFormat for details.
The mode argument defines if the blob should be copied to the target location or moved out of the internal store into the target location. See ExportMode for details.
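A sketch; the destination path is hypothetical and ExportProgress::finish is assumed to await completion.

use std::path::PathBuf;

// Sketch: copy a single blob out of the store into a regular file.
async fn export_blob(blobs: &Client, hash: Hash) -> anyhow::Result<()> {
    blobs
        .export(
            hash,
            PathBuf::from("/tmp/exported.bin"), // hypothetical absolute destination
            ExportFormat::Blob,
            ExportMode::Copy,
        )
        .await?
        .finish() // assumed helper to await completion
        .await?;
    Ok(())
}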
pub async fn list(&self) -> Result<impl Stream<Item = Result<BlobInfo>>>
List all complete blobs.
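A sketch using futures::StreamExt; BlobInfo is only printed via Debug to avoid assuming its fields.

use futures::StreamExt;

// Sketch: print every complete blob in the local store.
async fn print_blobs(blobs: &Client) -> anyhow::Result<()> {
    let mut stream = std::pin::pin!(blobs.list().await?);
    while let Some(info) = stream.next().await {
        println!("{:?}", info?);
    }
    Ok(())
}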
pub async fn list_incomplete(&self) -> Result<impl Stream<Item = Result<IncompleteBlobInfo>>>
List all incomplete (partial) blobs.
pub async fn get_collection(&self, hash: Hash) -> Result<Collection>
Read the content of a collection.
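A sketch; iterating a Collection as (name, hash) pairs is an assumption about its API.

// Sketch: print the entries of a collection.
async fn show_collection(blobs: &Client, hash: Hash) -> anyhow::Result<()> {
    let collection = blobs.get_collection(hash).await?;
    for (name, hash) in collection.iter() { // assumed iterator over (name, hash)
        println!("{name} -> {hash}");
    }
    Ok(())
}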
pub fn list_collections(&self) -> Result<impl Stream<Item = Result<CollectionInfo>>>
List all collections.
pub async fn delete_blob(&self, hash: Hash) -> Result<()>
Delete a blob.
Warning: this operation deletes the blob from the local store even if it is tagged. You should usually not do this manually, but rely on the node to remove data that is not tagged.
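A sketch combining delete_blob with has to confirm local removal.

// Sketch: remove a blob locally, then verify it is no longer complete.
async fn purge(blobs: &Client, hash: Hash) -> anyhow::Result<()> {
    blobs.delete_blob(hash).await?;
    println!("still complete after delete: {}", blobs.has(hash).await?);
    Ok(())
}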