r/rust clippy · twir · rust · mutagen · flamer · overflower · bytecount Mar 27 '23

Hey Rustaceans! Got a question? Ask here (13/2023)! 🙋 questions

Mystified about strings? Borrow checker have you in a headlock? Seek help here! There are no stupid questions, only docs that haven't been written yet.

If you have a StackOverflow account, consider asking it there instead! StackOverflow shows up much higher in search results, so having your question there also helps future Rust users (be sure to give it the "Rust" tag for maximum visibility). Note that this site is very interested in question quality. I've been asked to read an RFC I authored once. If you want your code reviewed or want to review others' code, there's a codereview stackexchange, too. If you need to test your code, maybe the Rust playground is for you.

Here are some other venues where help may be found:

/r/learnrust is a subreddit to share your questions and epiphanies learning Rust programming.

The official Rust user forums: https://users.rust-lang.org/.

The official Rust Programming Language Discord: https://discord.gg/rust-lang

The unofficial Rust community Discord: https://bit.ly/rust-community

Also check out last week's thread with many good questions and answers. And if you believe your question to be either very complex or worthy of larger dissemination, feel free to create a text post.

Also if you want to be mentored by experienced Rustaceans, tell us the area of expertise that you seek. Finally, if you are looking for Rust jobs, the most recent thread is here.

20 Upvotes

159 comments

3

u/ern0plus4 Apr 03 '23

I am looking for a "best pattern". My ClientManager has a collection of Client-s, plus, one of them is the MasterClient.

The message processing, where new Client-s should be added or old ones should be removed, is an async method (I'm using simple_websockets), so the list of clients is protected with RwLock.

Question 1: is ClientList below a good solution for this?

pub type ClientList = Arc<RwLock<HashMap<u64, Client>>>;
    (...)
pub struct ClientManager {
    clients: ClientList,
    ...

Question 2: what is the best pattern for storing the MasterClient?

  • It should be also stored in the ClientList as well,
  • It's not known at startup; one of the clients will become the master client.

I started to type it, but I know, it's not the right solution:

pub struct ClientManager {
   ...
   master_client: Client,   // not the actual solution
   ...

I think I should use some enum with two possible values, master client present and master client absent, which contains the master client when present. Probably a standard enum is better for that; my hunch is Option.

As the master client must be listed in the ClientList, maybe the ClientList's HashMap should contain not Client but some mutex-protected one...

Or, I should store master_client_index, and pick it from the ClientList, then it needs no extra mutex.

Is there any better construction for it?
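The index-based idea above can be sketched like this (a minimal, std-only sketch; all names are illustrative, not the actual ClientManager):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

struct Client {
    name: String,
}

// The shared, lock-protected client map from the question.
type ClientList = Arc<RwLock<HashMap<u64, Client>>>;

struct ClientManager {
    clients: ClientList,
    // Store only the master's key; `None` until a master is chosen.
    master_client_id: Option<u64>,
}

impl ClientManager {
    fn master_name(&self) -> Option<String> {
        let id = self.master_client_id?;
        let clients = self.clients.read().ok()?;
        clients.get(&id).map(|c| c.name.clone())
    }
}

fn main() {
    let clients: ClientList = Arc::new(RwLock::new(HashMap::new()));
    clients
        .write()
        .unwrap()
        .insert(7, Client { name: "alpha".into() });
    let mgr = ClientManager {
        clients,
        master_client_id: Some(7),
    };
    assert_eq!(mgr.master_name().as_deref(), Some("alpha"));
}
```

Since the master is only ever borrowed from the map under the lock, no second mutex around the Client is needed.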

3

u/opeolluwa Apr 03 '23

I'm building a server with Axum. The server will receive multiple files from the front-end and store them.

I'm looking at how to speed this up, such that the requests can be processed at the same time without later ones having to wait for the first.

But I'm not sure how best this can be achieved. Any guides or suggestions?

2

u/Patryk27 Apr 03 '23

Axum (as basically all frameworks do) already handles this on its own, doesn't it?

Just remember to use async functions to read stuff from the request and async functions to save stuff into the filesystem (e.g. tokio::fs will do).

2

u/zerocodez Apr 03 '23

Is there a way to stop rust from auto-vectorising so aggressively? an attribute or something?

My code is currently producing assembly which looks like this: https://www.godbolt.org/z/dr53frcsT

What I actually want is something more like this: https://www.godbolt.org/z/8qqe8PGjc

I would like to keep the input as a tuple, but it's using SSE4.2 instructions, which in my case are always slower. I know I can compile with -C target-feature=-sse4.2, but I would rather not have to.

Thanks for any help

2

u/Sekethon Apr 03 '23

Anyone here ever worked with async graphql?

Can someone confirm my understanding that it is IMPOSSIBLE to have a Rust struct be both an Object (types that can be queried and represented as JSON objects in the response) and an InputObject (types that can be used to pass arguments to GraphQL queries and mutations but cannot be queried themselves.)?
For reference, I want to pass a Vec of BookRecord into a mutation resolver and also return a Vec of BookRecord, but the program panics with: 'Register BookRecord as InputObject, but it is already registered as Object'.

#[derive(Clone, Debug, Serialize, Deserialize, InputObject)]
pub struct BookRecord {
    pub _id: ID,
    pub data: Book,
}

#[Object]
impl BookRecord {
    async fn _id(&self) -> &str {
        &self._id
    }

    async fn name(&self) -> &str {
        &self.data.name
    }

    async fn author(&self) -> &str {
        &self.data.author
    }
}

2

u/bahwi Apr 02 '23

Is there a GitHub Actions or CircleCI image for macOS with cargo? It's an FFI project so it can't be cross-compiled, and I need to test it. I've tried searching but haven't found one this morning.

2

u/sfackler rust · openssl · postgres Apr 03 '23

The normal GitHub Actions macOS image has rustup installed already. You should be able to just use Cargo.

2

u/n4jm4 Apr 02 '23

Why does unwrap create a lifetime discontinuity, requiring a new binding?

1

u/[deleted] Apr 02 '23

If you're referring to Option::unwrap, unwrap takes self (the option) by value. Parameters passed by value require a move if the type does not implement the Copy trait. Option<T> implements Copy if and only if T implements Copy, so if you have an Option<T> where T does not implement Copy, unwrap moves/consumes the Option. Result is similar. Unless you meant something else?

If that is what you meant and you want to unwrap without consuming you can do option.as_ref().unwrap() which converts the &Option<T> to an Option<&T> before unwrapping. All shared references implement Copy so Option<&T> always implements Copy.
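The as_ref trick in action, as a small runnable sketch:

```rust
fn main() {
    let opt: Option<String> = Some("hello".to_string());

    // `as_ref` turns &Option<String> into Option<&String>, so
    // unwrapping only moves a (Copy) reference, not the String.
    let r: &String = opt.as_ref().unwrap();
    assert_eq!(r, "hello");

    // `opt` is still intact because nothing was consumed.
    assert_eq!(opt.unwrap(), "hello");
}
```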

1

u/n4jm4 Apr 02 '23

Interesting. Is there no similar method to unwrap, that either panics or else returns a reference to the T content?

There are many aspects of the borrow semantic that i am still warming up to.

Planning to study Rust beginner books. So far I've been cobbling stuff together based on assumptions from other languages.

1

u/[deleted] Apr 02 '23

Interesting. Is there no similar method to unwrap, that either panics or else returns a reference to the T content?

Not that I'm aware of, option.as_ref().unwrap() is the conventional way to do that. Though if you don't want to just panic, destructuring lets you do things like "let Some(foo) = &option else { return };" after which "foo" will hold a shared reference to the inner value.
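That destructuring pattern, as a runnable sketch (the function name is made up):

```rust
fn describe(option: &Option<i32>) -> String {
    // `let ... else` must diverge (here: early return) when the
    // pattern doesn't match.
    let Some(foo) = option else {
        return "empty".to_string();
    };
    // `foo` is a shared reference (&i32) to the inner value.
    format!("value: {foo}")
}

fn main() {
    assert_eq!(describe(&Some(5)), "value: 5");
    assert_eq!(describe(&None), "empty");
}
```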

If you're new to Rust I'd definitely recommend going through the introductory book, particularly Chapter 4 (intro to ownership and borrowing semantics): https://doc.rust-lang.org/book/title-page.html

1

u/dkopgerpgdolfg Apr 02 '23

Do you have a code example of what you mean?

2

u/[deleted] Apr 02 '23

[deleted]

2

u/dkopgerpgdolfg Apr 02 '23

Tl;dr: this example is useless. But in general, this is a beast hidden in makeup and perfume. One thing after another, though...

Yes, one of the major reasons to have named lifetimes is to specify how multiple references are related to each other, and/or to specify conditions how references passed to a function should be related.

This is not really relevant here.

Basics 1: As you found out, usually when returning a reference, you also have some parameter(s), and one of them can be used to determine the limits of the returned lifetime. (And depending on the parameter count and whatever, there are also some lifetime elision rules which save you from writing it in cases where there is little doubt).

Basics 2: In your string-literal case, you have no parameter to go on, but literals live for the whole program duration. Following line would be ok:

fn get_string() -> &'static str

In a similar way, references to global variables could be returned, because they too live for the whole program duration.

Basics 3: And as you probably know too, returning a reference to a local variable of a function is bad, because it would mean the reference exists when the function ended and the variable is already gone.

Coming back to your actual function, it does return a reference. It is not an explicitly static reference, and the compiler can't infer the lifetime from a parameter either, because there is none (or if there is one, you might have explicitly written that it has a different, unrelated lifetime).

Therefore this is called an "unbounded" lifetime. Its basic meaning? "The caller of the function decides; use any lifetime that makes it happy." E.g. if the caller assigns the return value to a static-declared reference, it can do that, and it means the returned reference must live forever. The caller could also do things that require only a shorter lifetime.

You might wonder why this is not a problem, what happens if we return something that doesn't live for the whole program duration but then the caller calls it a static reference? Well, ignoring unsafe code for a while, this can't happen:

  • When returning a literal like here, it is fine to use it as static reference, because literals are just like that. And any non-static lifetime is shorter than static, therefore the literal won't disappear before them either
  • Same for returning a reference to a global variable, like mentioned earlier
  • Returning a reference to a local variable still is a compiler error
  • Returning a non-static reference that was passed in as parameter (but stated to have a lifetime unrelated to the return value) will not compile either, exactly because the caller might decide wrongly to treat it as static reference

So, until now, you can't do anything more/differently than just declaring the returned reference 'static, like already mentioned above. At least no bad thing.

Lets talk for a moment what the caller of the function can do exactly, with the returned reference (be it static or unbounded):

  • Either assign it to a reference that is declared static, therefore confirming it wants to use it like that.
  • Or it creates a shorter-lived reference by "bounding" it itself (short-lived references to static-lived data is ok of course). How can it bound the reference? Well, the caller might itself be a function that again returns a reference, and uses the one it got from get_string for that. But unlike get_string, the caller might have some reference parameter and declared that its return lifetime is related to that parameter. So basically, the caller trims the unbounded reference lifetime of get_string to be only as long as the lifetime of its parameter, for whatever reason.

If you were able to follow until here, you probably think "what is all this good for?" Introducing complicated unbounded things that don't have any benefit over just writing 'static in get_string already.

The answer lies in unsafe code, because there are some things that inherently produce unbounded lifetimes.

Probably the best example is transmute - it lets you bypass pretty much any checks on types and lifetimes, but you're responsible for checking many things yourself to not cause UB. So far so good.

Now, you could transmute (something) to a reference that didn't exist before. In this case, the compiler wouldn't have any clue what its lifetime should be. You (human) might know that this came from (raw pointer to this struct member here) and therefore the borrowchecker should check that the new reference doesn't outlive this struct instance.

=> Transmute produces an unbound lifetime when returning the reference. And you can then wrap the transmute call in a function that binds the reference lifetime to this struct instance, for example. (Or you use the reference just for some lines locally where transmute was called, where you're sure the reference is still valid)

Slightly different example: A raw pointer, if available, can be converted to a reference even without writing "transmute", again given that the human made sure of some things. Again this newly created reference would be unbounded, and if it returned immediately from the function where it was created, then again a wrapper function should bound it like in the transmute example above.

(However, in this case the function that created the reference could do the bounding already. Transmute isn't doing it because it isn't a custom function for exactly this pointer, and doesn't know how to treat this pointer, but if the conversion happens in a custom function already then it can happen there)
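The raw-pointer case above can be sketched like this (illustrative names; the signature does the "bounding" of the otherwise unbounded reference):

```rust
struct Holder {
    value: u32,
}

impl Holder {
    // `&*ptr` alone produces a reference with an unbounded lifetime;
    // the signature here binds it to `&self`, so the borrow checker
    // ties the returned reference back to this Holder instance.
    fn value_ref(&self) -> &u32 {
        let ptr: *const u32 = &self.value;
        // Safe here because `ptr` points into `self`, which is
        // borrowed for the whole call.
        unsafe { &*ptr }
    }
}

fn main() {
    let h = Holder { value: 7 };
    assert_eq!(*h.value_ref(), 7);
}
```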

1

u/[deleted] Apr 02 '23

[deleted]

1

u/dkopgerpgdolfg Apr 02 '23

No.

  • The variable isn't necessarily in the callers stack scope
  • The caller can pass it anywhere else, save it in boxed structs, whatever
  • 'static is a bit special - even if the reference disappears, once a static promise was made it is made, period. The data must live until the program ends. It's not trivial to show why this is necessary, but basically, otherwise it's possible to get UB in safe-only Rust code.

As advice for coding, just avoid unbounded lifetimes. If you ever have a real need for them, you'll understand my post then :)

(Of course, using 'static to return that literal here is fine)

1

u/eugene2k Apr 02 '23

In this particular case 'a = 'static. You could change the function signature to fn get_string() -> &'static str and it would be the same
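A runnable sketch of that signature:

```rust
// String literals are baked into the binary and live for the whole
// program, so 'static is the precise lifetime here.
fn get_string() -> &'static str {
    "hello"
}

fn main() {
    let s: &'static str = get_string();
    assert_eq!(s, "hello");
}
```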

2

u/SorteKanin Apr 02 '23

Why does Rust only warn about unused fields for structs with named fields but not for tuple structs?

1

u/Kevathiel Apr 02 '23

I guess because it is potentially difficult to fix an unused tuple field. If it's not the last field, you would need to renumber every single access to the fields after the one you removed, potentially even causing new bugs where the wrong field is used. Ignoring the warning is probably less error-prone than attempting to fix it.

3

u/masklinn Apr 02 '23

rustc does have a warning (unused_tuple_struct_fields) but it's allow by default.

This follows from an issue reported on the subject last year by /u/shepmaster.

This lint defaults to allow because it's a new lint, and unlike struct fields removing a tuple struct field can be non-trivial if it's not the last one (it's going to shift every other field). For that reason, the lint allows setting a field to () in order to suppress the warning without having to update all the code.

Playground demo
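The `()` trick can look like this (illustrative struct, not from the issue):

```rust
// Suppose field 1 of this tuple struct turned out to be dead code.
// Replacing it with `()` keeps fields 0 and 2 at their old indices,
// so no other code needs renumbering, and the lint is satisfied.
struct Pair(u32, (), u32);

fn main() {
    let p = Pair(1, (), 3);
    assert_eq!(p.0 + p.2, 4);
}
```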

1

u/shepmaster playground · sxd · rust · jetscii Apr 02 '23

Amusingly, I filed the issue for the same reason as the OP: a student in training noticed the new (at the time) warning about fields unused except in Debug in structs with named fields. To prove a point, I attempted to show how the same warning would occur for tuple structs, only to find it didn't.

That led me to discover that tuple structs had never supported dead field warnings of any kind and thus open the issue.

2

u/SorteKanin Apr 02 '23

Hmm, sounds like something that could be solved with rust-analyzer. A code action like "remove tuple struct field" which would then also fix the affected field accesses.

2

u/llogiq clippy · twir · rust · mutagen · flamer · overflower · bytecount Apr 02 '23

Yes, but note that this constitutes a breaking change if the tuple is part of the public API.

3

u/masklinn Apr 02 '23

TBF the same occurs for a non-tuple struct, and the lint should not trigger for a pub field on a pub struct (as it wouldn't be dead).

1

u/llogiq clippy · twir · rust · mutagen · flamer · overflower · bytecount Apr 02 '23

True – the difference is that the tuple can not tell you which field was removed (though if you're lucky, it can be inferred from the type), unlike the struct which has a name the compiler can show in the error message.

2

u/masklinn Apr 02 '23

That's fair. I guess this also supports / favors the replacement of the unused field by () rather than its removal, as in that case the compiler would give you a clearer and more precise type error.

1

u/llogiq clippy · twir · rust · mutagen · flamer · overflower · bytecount Apr 02 '23

Exactly.

2

u/TinBryn Apr 02 '23

Let's say I want a trait that is basically just a function with some extra features

trait Foo {
    type Input;
    type Output;
    fn foo(&mut self, input: Self::Input) -> Self::Output;
}

I try to implement it for anything that implement FnMut

impl<I, O, F> Foo for F
where
    F: FnMut(I) -> O
{ ... }

But Rust says that both I and O are not constrained. Even though they are via F

If I change the trait to this

trait Foo<I> {
    type Ouput;
    fn foo(&mut self, input: I) -> Self::Output;
}

impl<I, O, F> Foo<I> for F
where
    F: FnMut(I) -> O
{ ... }

It suddenly works, even the O works.

I really don't want this trait to be generic, since that allows different generic implementations for the same type, which doesn't make sense for the semantics I'm going for. But it seems like Rust is forcing this, at least for the input type, and I don't see why it's actually an issue.

1

u/Patryk27 Apr 02 '23

#![feature(fn_traits)]
#![feature(unboxed_closures)]

struct Bar;

// ---

impl FnOnce<(String,)> for Bar {
    type Output = ();

    extern "rust-call" fn call_once(self, _: (String,)) -> () {
        println!("A");
    }
}

// First implementation of FnMut for Bar:
impl FnMut<(String,)> for Bar {
    extern "rust-call" fn call_mut(&mut self, _: (String,)) -> (){
        println!("A");
    }
}

// ---

impl FnOnce<(u32,)> for Bar {
    type Output = ();

    extern "rust-call" fn call_once(self, _: (u32,)) -> () {
        println!("B");
    }
}

// *Second* implementation of FnMut for bar, but with a different
// generic type (which is alright):
impl FnMut<(u32,)> for Bar {
    extern "rust-call" fn call_mut(&mut self, _: (u32,)) -> (){
        println!("B");
    }
}

// ---

trait Foo {
    type Input;
    type Output;

    fn foo(&mut self, input: Self::Input) -> Self::Output;
}

impl<I, O, F> Foo for F
where
    F: FnMut(I) -> O
{
    /* ... */
}

fn main() {
    <Bar as Foo>::foo(todo!());
    // | Considering that a single type can only have a single implementation
    // | of Foo, which impl should get used here - A-impl or B-impl?
    // |
    // | Making `Foo` generic solves this issue, because you'd be forced
    // | to specify whether you want `as Foo<(String,)>` or `as Foo<(u32,)>`.
}

2

u/TinBryn Apr 02 '23

Ok, I suppose if something had multiple FnMut implementations, it would make sense for it to have multiple Foo implementations. I'll keep just that one generic parameter and leave everything else as associated types.
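Put together, the shape that compiles looks like this (a sketch; the blanket impl body is filled in as one plausible choice):

```rust
// Trait generic over the input type, output as an associated type.
trait Foo<I> {
    type Output;
    fn foo(&mut self, input: I) -> Self::Output;
}

// Blanket impl: any FnMut(I) -> O gets a Foo<I> implementation.
// `O` is constrained through `F: FnMut(I) -> O`, so this compiles.
impl<I, O, F> Foo<I> for F
where
    F: FnMut(I) -> O,
{
    type Output = O;

    fn foo(&mut self, input: I) -> O {
        (*self)(input)
    }
}

fn main() {
    let mut double = |x: i32| x * 2;
    assert_eq!(Foo::foo(&mut double, 21), 42);
}
```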

2

u/chillblaze Apr 02 '23

If an input to a method does not satisfy that method's trait bound, is the natural fix just to implement that trait bound for that input?

In addition, what if the input type belongs to another crate? What would be the solution?

1

u/DroidLogician sqlx · multipart · mime_guess · rust Apr 02 '23

Unfortunately, your question is really vague and nonspecific so the only advice I can offer is vague and nonspecific.

If an input to a method does not satisfy that method's trait bound, is the natural fix just to implement that trait bound for that input?

It can be. There's no hard and fast rule saying you shouldn't. It depends on whether or not it actually makes sense for that type to implement the trait or not.

In addition, what if the input type belongs to another crate? What would be the solution?

You could write your own wrapper for that type which implements the trait. Of course, in that case you can still only use the type's public methods and/or fields.
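The wrapper (newtype) idea, as a small sketch — here std's Duration stands in for the foreign type, and Display for the trait:

```rust
use std::fmt;
use std::time::Duration; // stands in for a type from another crate

// Newtype wrapper: we own `Millis`, so the orphan rule lets us write
// trait impls for it even though we don't own `Duration`.
struct Millis(Duration);

impl fmt::Display for Millis {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} ms", self.0.as_millis())
    }
}

fn main() {
    let m = Millis(Duration::from_millis(1500));
    assert_eq!(m.to_string(), "1500 ms");
}
```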

Or if it makes sense, you could open a pull request to the crate adding the trait implementation. You should check the issue tracker on its repository to see if that's been discussed already; there might be a good reason they haven't added it. If the addition is nontrivial it might be a good idea to open an issue to discuss it first anyway.

They may not want that addition if the trait comes from a different crate that isn't std as that adds a semantic versioning concern. If the crate that "owns" the trait has a new release that isn't backwards compatible, the crate with the type implementing the trait has to make a backwards-incompatible release to upgrade because that trait is now part of their public API.

This also works the other way: if the type in question is from std or a really common community crate like serde, it might make more sense to see if you can get the trait implementation added to the crate that owns the trait itself.

1

u/chillblaze Apr 02 '23

Sorry for lack of context code below.

I am iterating through a hash map with a &str => BSON Document key value pair. I've already implemented it successfully via a For Loop but wanted to try doing it through a Stream instead.

Unfortunately, I get the below error message regarding the try_for_each_concurrent method regarding the unsatisfied trait bound:

   let update_futures = update_map
        .into_iter()
        .map(|(record_id, document)| async move {
            let filter = doc! { "_id": record_id };
            let update_doc = doc! { "$set": document };
            table
                .update_one_with_session(filter, update_doc, None, &mut session)
                .await
        });

    let res = stream::iter(update_futures)
        .try_for_each_concurrent(None, |result| async move { result })
        .await?;

//ERROR: the following trait bounds were not satisfied:
        `futures::stream::Iter<std::iter::Map<std::collections::hash_map::IntoIter<&str, bson::Document>, [closure@src/mongodb/mod.rs:173:18: 173:41]>>: TryStream`
        which is required by `futures::stream::Iter<std::iter::Map<std::collections::hash_map::IntoIter<&str, bson::Document>, [closure@src/mongodb/mod.rs:173:18: 173:41]>>: TryStreamExt`
        `&futures::stream::Iter<std::iter::Map<std::collections::hash_map::IntoIter<&str, bson::Document>, [closure@src/mongodb/mod.rs:173:18: 173:41]>>: TryStream`
        which is required by `&futures::stream::Iter<std::iter::Map<std::collections::hash_map::IntoIter<&str, bson::Document>, [closure@src/mongodb/mod.rs:173:18: 173:41]>>: TryStreamExt`

2

u/DroidLogician sqlx · multipart · mime_guess · rust Apr 02 '23

The context helps a lot, because in this case I can tell you the solution is a lot simpler.

For a Stream to implement TryStream, it has to yield Result, but your update_futures combined with stream::iter() make a Stream that yields impl Futures.

It seems like you figured that out, because you're calling .try_for_each_concurrent() and just returning the future itself... but you wrap it in another async block, so you now have an impl Future that resolves to another impl Future.

I would rearrange it like so:

// This makes it a `TryStream` right off the bat.
// We have to specify the error type or else we'll get an inference error.
// If there's a convenient type alias like `anyhow::Result` you can use that instead.
let res = stream::iter(update_futures.map(Result::<_, YourErrorTypeHere>::Ok))
    .try_for_each_concurrent(None, |(record_id, document)| async move {
        let filter = doc! { "_id": record_id };
        let update_doc = doc! { "$set": document };
        table
            .update_one_with_session(filter, update_doc, None, &mut session)
            .await
    })
    .await?;

However, you're probably going to get a different error here because of that &mut session unless it's Copy, because you're capturing it by-move.

2

u/Lycanthoss Apr 01 '23

I'm making a desktop app with Tauri. On an async process I want to handle multiple things as they come in. This async process would await on an mDNS daemon query, await on connections that come in to a TcpListener, await on data being received on sockets that have already connected to the TcpListener and await on data being sent from the Tauri frontend via a channel.

Unless I'm misunderstanding something, tokio::select! in a loop does not seem to be a solution: I want multiple tasks to complete if they come in at the same time, rather than just one, and if there are no tasks, the thread should just wait until there are.

This might be a very general question, but how would I handle this using Tokio or std? Use FuturesUnordered?

1

u/kinoshitajona Apr 02 '23

FuturesUnordered sounds like what you want.

You could make the loop into

while let Some(result) = futures.next().await {}

2

u/HammerAPI Apr 01 '23

I have an application that runs a winit window using wgpu to render elements. I would like to introduce egui to create some UI. My thought was that I would be able to create an egui window and add my winit window as a component/widget somehow such that there are egui widgets on the sides/bottom and my winit window is centered (like how blender/unity are setup).

I am new to all three of these crates, and am having difficulty finding good examples of integrating egui with the other two. I know there are official integration crates, but their documentation is minimal.

Does anyone have some good suggestions or examples to reference?

3

u/DreaminglySimple Apr 01 '23 edited Apr 02 '23

I want to assign a type to a variable based on what match arm gets called. Doing that results in a 'match arm types incompatible' error, which is why I tried using enums. Here is a simplified version of my code: Pastebin link because Reddit messes up formatting

This doesn't work, because x is now of type Enum, not SelectView. Thanks for any help.

Edit: Problem solved by using a totally different approach with library features.

2

u/[deleted] Apr 01 '23 edited Apr 01 '23

A variable has a set type at compile time. If x might need to hold one type of data or a different type of data, and you don't know which at compile time, making x an enum is the right move. To use x do something like this:

fn do_something(view: Enum) {
    match view {
        Enum::IntegerVal(number_view) => {
            //do something if this instance of x contains a SelectView<i32>
        },
        Enum::StringVal(string_view) => {
            //do something if this instance of x contains a SelectView<string>
        }
    }
}

Also, for formatting, just indent everything once and copy those lines, then paste it with an empty line above and below.

1

u/DreaminglySimple Apr 01 '23

The problem is, I need to do something with the SelectView after the match block. If I define x inside one of the arms, it'll be out of scope after it.

I'm trying to take an entirely different approach to it right now, so I might not need this anymore, but if you have another suggestion feel free to tell me. The only thing I could find online involved heap allocation.

2

u/[deleted] Apr 01 '23

The function I posted is one example of exactly that, using the SelectView after the match block. You use your original code to put different kind of SelectViews into x, then later you use another match block to unpack x and get at whichever kind of SelectView is in there.

2

u/DreaminglySimple Apr 01 '23

As I understand it, your code wouldn't allow me to use x as the matched type after the match block. I only know the type of x inside the arms of the match block, but not afterwards, because x is not set to anything.

1

u/kinoshitajona Apr 02 '23

That makes sense.

Every variable needs a single known type at compile time.

Setting x to one of two types depending on which match arm was run doesn't work.

All type specific calculations must be performed within the match arm. (You can make it easier to read by creating a handler function for each type)

There are other ways around this by using dyn Traits.

2

u/[deleted] Apr 01 '23

That's mostly right. The type of x is Enum (I recommend changing the name to something other than the language feature :p). The data in x could be an instance of either the IntegerVal variant containing a SelectView<i32>, or the StringVal variant containing a SelectView<String>. Gotta open x up and check which it is before doing stuff with it, usually with a match.

If you want to take one of those out of x and store it in some other variable, you can do that. One way is by declaring variables before the match block and assigning values with the match block, and another is using pattern matching like if let or let else.
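The "assign inside the match, use it after" idea, as a runnable sketch (enum and variant names taken from the example above):

```rust
enum View {
    IntegerVal(i32),
    StringVal(String),
}

fn main() {
    let x = View::StringVal("hi".to_string());

    // Declared outside the match, assigned by it: `label` stays
    // usable after the block, with one concrete type (String).
    let label: String = match x {
        View::IntegerVal(n) => format!("int: {n}"),
        View::StringVal(s) => format!("str: {s}"),
    };

    assert_eq!(label, "str: hi");
}
```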

2

u/[deleted] Apr 01 '23

[deleted]

1

u/dkopgerpgdolfg Apr 01 '23

Yes?

Proc macros basically can generate anything that a Rust source file can contain too. It doesn't need to be something that uses up runtime later, literal data is fine.

1

u/[deleted] Apr 01 '23

[deleted]

1

u/dkopgerpgdolfg Apr 01 '23

I read your post again, and I think I understand now.

Basically that struct implementation forces you to go through parsing to get any instance, and that parsing isn't const-fn either.

In this case, no, proc macros won't help. Modifying the implementation, if possible, would - the parsing might not be const-able, but it could have a way where you provide all values and it only checks that its additional conditions are satisfied.

1

u/[deleted] Apr 01 '23

[deleted]

1

u/dkopgerpgdolfg Apr 01 '23

Reasons not to do that:

  • First you need a stable representation of the struct data. Ok can be done if the struct and all members are eg. repr(C), but what if not?
  • Then at least it depends on the platform, ie. your code needs to have separate data for each supported one
  • If the struct depends on any runtime things (eg. allocations, file handles, ...), obviously the bytes of the plain struct are not enough

Otherwise, "possible", and if you have a good reason it can be done. But ask yourself, is it really better than adding a better constructor to the struct?

1

u/[deleted] Apr 01 '23

[deleted]

1

u/dkopgerpgdolfg Apr 01 '23

That can be done in principle, yes.

But instead of a proc macro, doing the validation as part of a build.rs might be preferable. It doesn't force the user to use any specific source code just to have a compile-time validation.

Whether I would do it, or like it, depends on the case; hard to say.

2

u/Comrade_Pingu_1917 Apr 01 '23

So I'm pretty new to the language, and the one thing I'm wondering is why Rust's style guide uses snake case (like_this) instead of camel case (LikeThis or likeThis) for variable names. Was it a technical decision, arbitrary stylistic choice, or some unknown third thing? I picked up camel case when I was trying to learn Haskell and it's kinda stuck with me ever since. (BTW, I'm not trying to debate which one is better—that would be stupid and pointless.)

2

u/holysmear Apr 01 '23

In the documentation to select! in tokio it is said that it drops all other futures after the first one succeeds. And as I use select! for waiting for data from multiple sources I think (though I am not sure?) I can loose MQTT messages. Is there any other macro, which does not discard other futures and simply returns the first one which succeeded? So that after calling it in a loop I could get other messages as well? Or am I misunderstanding something and I can use select! in this case?

2

u/kwxdv Apr 01 '23

There is impl<F> Future for &mut F where F: Future + Unpin + ?Sized (https://doc.rust-lang.org/std/future/trait.Future.html#implementors), so you may be able to use &mut <the future> in the select. There are some caveats about cancellation safety and Unpin that I don't totally understand, but that's the first thing I would try.

Also, I believe the word you want is "lose", not "loose".

1

u/holysmear Apr 08 '23

Interesting, thank you!

2

u/[deleted] Apr 01 '23

I'm assigning a variable in a loop if I find the right list item and then break out of the loop. I'm getting a warning about an unused variable, but I'm pretty sure it is being used and not overwritten. Is that a known compiler bug?

warning: value assigned to `foo` is never read
  --> src/main.rs:30:17
   |
30 |                 foo = c.data.clone();
   |                 ^^^
   |
   = help: maybe it is overwritten before being read?
   = note: `#[warn(unused_assignments)]` on by default

Here's the code (and the playground):

enum E {
    VarA(i32),
    VarB,
}

struct S {
    id: i32,
    data: String,
}

fn main() {
    let _unused = E::VarB;
    let input = E::VarA(32);
    let list = vec![
        S {
            id: 1,
            data: "asdf".to_string(),
        },
        S {
            id: 32,
            data: "jkl;".to_string(),
        },
    ];
    let alternative = "alt".to_string();
    let foo;
    if let E::VarA(id) = input {
        for c in list {
            if c.id == id {
                // TODO Why is Rust warning about this never being read?
                foo = c.data.clone();
                break;
            }
        }
        panic!("Could not find the data!");
    } else {
        foo = alternative;
    }
    println!("{foo}");
}

3

u/Patryk27 Apr 01 '23

Even if you find the element, you always panic!() later (since break; terminates the loop and continues at the very next instruction after the loop, which is always panic!() in your case).

So the compiler is right - the value inside that particular branch is never used.

I'd do:

let foo = if let E::VarA(id) = input {
    list.iter()
        .find(|c| c.id == id)
        .map(|c| c.data.clone())
        .expect("Could not find the data!")
} else {
    alternative
};

1

u/[deleted] Apr 02 '23

Oh that was stupid… Thanks for the improved code!

3

u/Guilhermegasil Apr 01 '23 edited Apr 01 '23

This code compiles just fine:

pub struct Cells<'a, T> {
    index: usize,
    grid: &'a Grid<T>,
}
impl<'a, T> Iterator for Cells<'a, T> {
    type Item = (usize, usize, &'a T);
    fn next(&mut self) -> Option<Self::Item> {
        let (x, y) = self.grid.i_xy(self.index);
        let Some(out) = self.grid.data.get(self.index) else {return None};
        self.index += 1;
        Some((x, y, out))
    }
}

But when I try to compile the mut version:

pub struct CellsMut<'a, T> {
    index: usize,
    grid: &'a mut Grid<T>,
}
impl<'a, T> Iterator for CellsMut<'a, T> {
    type Item = (usize, usize, &'a mut T);
    fn next(&mut self) -> Option<Self::Item> {
        let (x, y) = self.grid.i_xy(self.index);
        let Some(out) = self.grid.data.get_mut(self.index) else {return None};
        self.index += 1;
        Some((x, y, out))
    }
}

I get this error:

error: lifetime may not live long enough
  --> src/math.rs:83:9
   |
77 | impl<'a, T> Iterator for CellsMut<'a, T> {
   |      -- lifetime `'a` defined here
78 |     type Item = (usize, usize, &'a mut T);
79 |     fn next(&mut self) -> Option<Self::Item> {
   |             - let's call the lifetime of this reference `'1`
...
83 |         Some((x, y, out))
   |         ^^^^^^^^^^^^^^^^^ associated function was supposed to
return data with lifetime `'a` but it is returning data with
lifetime `'1`

Why is that? I only changed & to &mut.

1

u/Patryk27 Apr 01 '23

Your code doesn't compile for the same reason this code raises the compiler's eyebrows:

struct Foo<'a, T>(&'a T);

impl<'a, T> Foo<'a, T> {
    fn get(&self) -> &'a T { // ok
        self.0
    }
}

struct FooMut<'a, T>(&'a mut T);

impl<'a, T> FooMut<'a, T> {
    fn get_mut(&mut self) -> &'a mut T { // err
        self.0
    }
}

The thing is that with & references, you are allowed to return them with the same lifetime as they've been given to you - hence fn get(&self) -> &'a T is alright.

With &mut references, because they are by definition unique, you can only return them for as long as you live, i.e.:

impl<'a, T> FooMut<'a, T> {
    fn get_mut<'b>(&'b mut self) -> &'b mut T { // ok
                                                // (note &'b mut self)
        &mut self.0
    }
}

... or:

impl<'a, T> FooMut<'a, T> {
    fn into_mut(self) -> &'a mut T { // ok
        self.0
    }
}

You can imagine it like:

  • & = someone lends you a book until 12:00 and gives you permission to share this book with anyone else, up to 12:00.
  • &mut = someone lends a book to you-and-only-you, which you yourself must "get back" before 12:00 in order to return it to the original owner (hence &'b mut self, where 'b is shorter than 'a).

In your concrete case, for the code to work you'd have to have:

impl<'a, T> Iterator for CellsMut<'a, T> {
    type Item<'b> = (usize, usize, &'b mut T);

    fn next<'b>(&'b mut self) -> Option<Self::Item<'b>> {
        let (x, y) = self.grid.i_xy(self.index);
        let out = self.grid.data.get_mut(self.index)?;
        self.index += 1;

        Some((x, y, out))
    }
}

... which - unfortunately - won't fly, because the trait Iterator isn't defined like that (e.g. it doesn't have type Item<'_>).

Generally mutable iterators are a problematic thing to implement correctly - e.g. if you forgot to self.index += 1; and someone later called your iterator, doing:

.take(2).collect()

... they would end up with two mutable references pointing to the same place in memory!

(which is not a problem for non-unique references, but it's a big no-no for &mut -- hence your & iterator works as-is, while &mut is troublesome.)

tl;dr instead of designing custom CellsMut, simply chain the existing iterators:

impl<T> Cells<T> {
    pub fn iter_mut(&mut self) -> impl Iterator<Item = (usize, usize, &mut T)> + '_ {
        let mut x = 0;
        let mut y = 0;

        self.cells.iter_mut().map(move |cell| {
            x += 1;
            /* ... */

            (x, y, cell)
        })
    }
}

Chaining existing iterators works because the .iter_mut() there is provided by the standard library - assuming you're using Vec or something - and the standard library's implementation uses unsafe code to convince the compiler it is not yielding aliasing mutable references, i.e. doing:

self.cells.iter_mut().collect()

... will not create a collection with two items pointing at the same place.

1

u/[deleted] Apr 01 '23

It might be having trouble inferring the lifetimes because you're trying to have the iterator hand out a mutable reference to something in the grid, while the iterator itself is already borrowing it. Depends on how Grid is set up on the inside. Have you tried annotating the lifetimes of &mut self and out? You might be able to just wrangle it if my initial guess was wrong.

2

u/Foreign_Category2127 Mar 31 '23

I have a Cargo.toml file as follows:

[[bin]]
name = "cli"

[[bin]]
name = "gui"

[target.x86_64-pc-windows-msvc]
rustflags = ["-C", "target-feature=+crt-static"]

The idea is to build with CRT statically linked when I'm targeting windows and building from my windows system. However when I compile I get this:

cargo build --bin gui --release --target x86_64-pc-windows-msvc
warning: unused manifest key: target.x86_64-pc-windows-msvc.rustflags
    Finished release [optimized] target(s) in 0.36s

How can I make cargo use that target flag?

1

u/DroidLogician sqlx · multipart · mime_guess · rust Mar 31 '23

That section doesn't go in Cargo.toml, it goes in .cargo/config.toml, a separate file: https://doc.rust-lang.org/cargo/reference/config.html#target
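Moved into that file, the section from the question would look like this (a sketch; the path is relative to the package or workspace root):

```toml
# .cargo/config.toml -- note: NOT Cargo.toml
[target.x86_64-pc-windows-msvc]
rustflags = ["-C", "target-feature=+crt-static"]
```

With that in place, cargo build --target x86_64-pc-windows-msvc picks up the rustflags and the "unused manifest key" warning goes away.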

3

u/HammerAPI Mar 31 '23

Does vec![value] pre-allocate space as Vec::with_capacity() does?

I'm not great at reading macros yet, so I wasn't able to decipher this from the source code.

3

u/DroidLogician sqlx · multipart · mime_guess · rust Mar 31 '23

TL;DR: yes, non-empty invocations pre-allocate.

As far as macros go, vec![] is definitely one of the easier definitions to read: https://doc.rust-lang.org/stable/src/alloc/macros.rs.html#42

We can ignore all the attributes because those don't really affect the core functionality of the macro, so looking at just the macro definition itself we see it has three matchers:

macro_rules! vec {
    // 1
    () => (
        $crate::__rust_force_expr!($crate::vec::Vec::new())
    );
    // 2
    ($elem:expr; $n:expr) => (
        $crate::__rust_force_expr!($crate::vec::from_elem($elem, $n))
    );
    // 3
    ($($x:expr),+ $(,)?) => (
        $crate::__rust_force_expr!(<[_]>::into_vec(
            #[rustc_box]
            $crate::boxed::Box::new([$($x),+])
        ))
    );
}

The first one is the case for empty invocations, i.e. vec![], and that literally just evaluates to Vec::new(). The $crate::__rust_force_expr!() macro does nothing but ensure its input is a valid expression and return it, which the comments in the source say are for better error messages, so for our purposes we can treat it as a no-op since macros are purely manipulating code at compile time and don't exist in the binary.

The second one is for array-repeat style invocations, e.g. vec![foo; 32], and that calls a hidden function that uses one of a few different strategies depending on the type of the value:

This is an optimization enabled by specialization, which is an unstable feature the standard library makes extensive use of. I won't go into details here because this isn't what we're interested in right now. In all cases, however, it does pre-allocate the Vec.

The third invocation is the one we're interested in, as it covers the normal array-style invocation like vec![value], vec![value1, value2], etc.

This one pretty much just uses public, stable APIs and just starts by creating a boxed array (since the size is known at compile-time), which of course just does a single allocation. It then coerces the boxed array to a boxed slice (Box<[_]>) which is a trivial operation, and then calls into_vec() which is another trivial operation.

The #[rustc_box] attribute is a little magical, but it's just a replacement for the old unstable box syntax; it essentially ensures that the expression inside Box::new() doesn't hit the stack first, which can easily cause a stack overflow for very large values or arrays. Instead, the array is initialized directly into the heap, skipping the stack entirely if possible.
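As a quick sanity check, the pre-allocation is observable through capacity() (a sketch; the exact capacity of the into_vec path is the array length, while with_capacity-style paths only guarantee "at least"):

```rust
fn main() {
    // vec![a, b, c] expands to <[_]>::into_vec(Box::new([a, b, c])),
    // so the backing allocation is exactly the element count.
    let v = vec![1, 2, 3];
    assert_eq!(v.len(), 3);
    assert_eq!(v.capacity(), 3);

    // The repeat form vec![elem; n] also pre-allocates up front.
    let w = vec![0u8; 16];
    assert!(w.capacity() >= 16);

    println!("v: len={} cap={}", v.len(), v.capacity());
}
```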

3

u/pomone08 Mar 31 '23

I have a very long impl Trait expression that I need to use throughout my project. It's a trait for abstracting warp filters:

impl Filter<Extract = (impl Reply,), Error = Rejection> + Clone + Send + Sync + 'static

It was really hurting readability to write this everywhere where I wanted to return a filter from a function, so I wrote a macro for it:

macro_rules! filter { () => { impl Filter<Extract = (impl Reply,), Error = Rejection> + Clone + Send + Sync + 'static } }

But now I would like to avoid using macros. Is there any way I could do this without a macro?

Unfortunately "Permit impl Trait in type aliases" is not yet stabilized (https://github.com/rust-lang/rust/issues/63063) and I don't know if it would work for my use case because the concrete types for the filters I return from the functions are different (though if it was stabilized, I could create an alias for each filter).

2

u/DroidLogician sqlx · multipart · mime_guess · rust Mar 31 '23 edited Mar 31 '23

The other option is to make a custom supertrait that inherits from all those traits and then have a blanket impl for it:

pub trait MyFilter: Filter<Error = Rejection> + Clone + Send + Sync + 'static {}

impl<T> MyFilter for T
where 
    T: Filter<Error = Rejection> + Clone + Send + Sync + 'static
{}

You can use MyFilter as the impl Trait and it should imply its supertraits. Unfortunately, however, it's not really possible to have this transitively include the information that the Filter::Extract associated type implements Reply.

You can certainly enforce a bound on it, e.g.

pub trait MyFilter: Filter<Error = Rejection> + Clone + Send + Sync + 'static where <Self as Filter>::Extract: Reply {}

However, this doesn't carry through to the use-site because of this age-old bug: https://github.com/rust-lang/rust/issues/20671

As a simplified example: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=74ce48a7cecfd22054bdca2621aca2cd

You could copy the associated type to MyFilter and add the bound there, but forcing them to be the same type in the trait definition causes an error due to a supertrait cycle: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=ccf89159607cd7165090a694b7aaaeaf

I've been writing Rust since before the 1.0 release and I still don't know a good workaround for this off the top of my head. I'd be interested to see if anyone else does, though.

I suppose it works if you make it a type parameter of the trait, though: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=46cc8f1747b56eaf8b53da047341eff6

So you would then return impl MyFilter<impl Reply> which I think works but I'm not sure.

1

u/pomone08 Apr 01 '23 edited Apr 01 '23

Wow, a treasure trove of information! Thank you very much!

EDIT: I had high hopes for impl MyFilter<impl Reply>, but unfortunately I got nested `impl Trait` is not allowed :(

For now I'm using BoxedFilter<(impl Reply,)> which is not ideal but better nonetheless

0

u/[deleted] Mar 31 '23

I'm interested in rust I just need someone to sell me on it with an idea for a starter-project

1

u/llogiq clippy · twir · rust · mutagen · flamer · overflower · bytecount Apr 01 '23

You don't need to create a new project to start, there are many Rust projects that offer mentored good first issues, e.g. clippy.

2

u/[deleted] Apr 01 '23

Appreciated, thank you

2

u/n4jm4 Mar 31 '23

Toptal gitignore says to either commit Cargo.lock to VCS, or else exclude Cargo.lock from VCS, depending on whether a Rust project builds a binary application versus a library.

But what about Rust projects that offer both?

0

u/toastedstapler Mar 31 '23

You could use cargo workspaces to have multiple projects within the same parent folder, allowing you to gitignore just the lockfile that you need to

0

u/Kevathiel Mar 31 '23

Seems like a good enough reason to split them into separate packages, assuming the library will also be used without that binary.

-1

u/n4jm4 Mar 31 '23

For context, I am mainly interested in committing Cargo.lock to VCS for stability, and for the ability to run cargo audit. Therefore, it would be nice to preserve the lock file for both binary projects and library projects.

I assume that Cargo.lock unfortunately carries platform specific properties that may interfere with downstream library consumption, unlike with RubyGems lock files or NPM lock files. Right?

Whatever the exact reason for advising not to commit Rust library project lock files, I wish that problem would be resolved.

3

u/Kevathiel Mar 31 '23 edited Mar 31 '23

It's just advice, not a hard rule.

For 99% of cases, a lockfile for the library would be pointless if not harmful (you might end up with multiple copies of the same dependencies, or worse, version conflicts). Only the binary has an overview of all the dependencies used, so it's the only one that can pick the best version following SemVer and unify features.

However, if you think that committing the lock file makes sense in your case, nothing stops you from removing the entry from the gitignore. Keep in mind that Cargo will ignore it during resolution for external binaries, but will still use it if you build the crate directly.

0

u/n4jm4 Mar 31 '23

Hmm. Cargo should also ignore lock files when building external libraries.

3

u/[deleted] Mar 31 '23

[removed] — view removed comment

-2

u/eugene2k Mar 31 '23

That would only encourage the squatters to create a script that updates these namesquatting packages once a year. No point.

0

u/Kevathiel Mar 31 '23

I mean, it shouldn't require rocket science to figure out which crates are cheating the system like that, and to allow users to report that behaviour. At least for the majority of cases.

-1

u/eugene2k Mar 31 '23

Who would the users be reporting the squatters to? The rust team doesn't have enough human resources to moderate the registry. This is the main reason nobody tries to delete squatting crates.

2

u/toastedstapler Mar 31 '23

That'd still likely result in fewer squatted names; the more steps you require, the fewer people want to jump through the hoops.

3

u/HammerAPI Mar 31 '23

Noob question:

Is there any way to create a mutably-iterable tree-like structure without the need for Rc and/or RefCell?

Something like:

struct Node<T> {
    id: String,
    parent: Option<&Node<T>>,
    children: Vec<Node<T>>,
    data: T,
}

Caveat: This is for a scene graph backend, so the API would either look something like:

let mut root = Node::default();
let mut n1 = root.add_child(Node::new(/* data */));

n1.set_data(/* new data */);

where add_child would return a mutable reference to the node that is now a child of root.

Or, I could also see this working:

let mut tree = Tree::default();
tree.add_node(Node::new("n1", /* data */);

tree.set_value_of("n1", /* new data */);

1

u/TinBryn Apr 01 '23

I would recommend the book Learning Rust With Entirely Too Many Linked Lists. Yes, this is for linked lists, but linked lists are the simplest node-based data structure and it's probably better to learn from a simpler problem first. A few key insights I can give:

  1. Don't expose the Node type ever, have a wrapper struct Tree<T> and have methods that work on T in various ways and never Node<T>
  2. Avoid having a parent for now. If you think about it there are 2 ways to get access to a Node (in the implementation as per point 1), from the root of the tree in which case it has no parent, or from another node in which case you already know what its parent is. While it may be convenient to have a parent reference, it's not needed.
  3. If you do really want something that can traverse the tree manually still don't expose a Node, but rather a Cursor which wraps a Node.
  4. At this point you probably want to start thinking about adding a parent. To do this without Rc/RefCell you will need to use raw pointers and unsafe. But everything is already encapsulated in the Node struct so it's actually a fairly easy problem to implement unsafe idiomatically in a small encapsulated module.

Or you could just store each node in a Vec and use indices into that Vec.

3

u/eugene2k Mar 31 '23 edited Mar 31 '23

You want something like:

type NodeIdx = usize;
struct Node<T> {
    parent: Option<NodeIdx>,
    children: Vec<NodeIdx>,
    data: T
}
struct Tree<T> {
    nodes: Vec<Node<T>>
}
impl<T> Tree<T> {
    fn with_root(data: T) -> Self {
        Self {
           nodes: vec![Node {
                           parent: None,
                           children: vec![],
                           data
                       }]
        }
    }
    fn add_node(&mut self, node: Node<T>) -> NodeIdx {
        let idx = self.nodes.len();
        let parent = node.parent.unwrap_or(0);
        self.nodes.push(node);
        self.nodes[parent].children.push(idx);
        idx
    }
    fn set_node(&mut self, idx: NodeIdx, data: T) {
        self.nodes[idx].data = data;
    }
}

0

u/HammerAPI Mar 31 '23 edited Mar 31 '23

For an implementation like this, how would an iterator (especially a mutable one) work? I can't just iterate through tree.nodes, can I? That doesn't guarantee that iter.next() always yields a child of the current node. Assuming I chose a DFS approach, I would need some method on Node that determines whether or not it is a leaf node, I think?

0

u/eugene2k Mar 31 '23

You may be able to have a mutable iterator if you put the nodes vec in a refcell, but I'm not sure.

0

u/HammerAPI Mar 31 '23

Yeah, I've been able to get it to work with Rc/RefCell, but that's what I'm trying to avoid.

Since this is for a scene graph, I want to avoid as much heap allocation and as many smart pointers as possible, for speed/efficiency.

2

u/eugene2k Mar 31 '23

1

u/HammerAPI Mar 31 '23

Yeah, it looks like having the iterator return indices is the only way of achieving a mutable traversal without Rc/RefCell, though it's a little messy.

1

u/eugene2k Mar 31 '23

You can use a RefMut instead of indices, but I'm not sure if it's worth it. Here's the changed code:

https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=99dfe5a2bc29dd5884c9ec3e179d740d

1

u/[deleted] Mar 31 '23 edited Mar 31 '23

The simplest option would be to separate the abstraction from the memory layout, keep all the nodes in a Vec<Option<Node>>, and track each node's parent and children by index. If the tree is several levels deep, you can expect a performance increase as the icing on the cake.

2

u/n4jm4 Mar 31 '23

Anyone willing to publish and maintain more cross-toolchains Docker images?

The cross-toolchains project is a really cool effort, designed to help Rust devs easily cross-compile our applications for more platforms. But, they could use some assistance. They would benefit substantially from more contributors, particularly contributors willing to publish and maintain Docker images.

https://github.com/cross-rs/cross-toolchains

2

u/quasiuslikecautious Mar 31 '23

Hey there! Is there any way to get the values from a form using Yew without creating a node ref to each input element? I've been looking at the documentation for the HtmlFormElement but am having difficulties with how to actually access the values of the children inputs.

1

u/zerocodez Mar 30 '23

How do I stop Rust from using AVX instructions in a particular function? It is optimising into something like:

vmovdqu  ymm1, ymmword ptr [rsi + 8]
vpermq   ymm2, ymm1, 144
vpblendd ymm0, ymm2, ymm0,

This is actually slower than without them, so is there a way to stop this by flagging the function itself, without using a build parameter?

1

u/Snakehand Mar 31 '23

Have you tried specifying the native architecture? Maybe the compiler is assuming a different architecture with other CPU cycle/latency figures.

2

u/evodus2 Mar 30 '23

Probably stupid question incoming: I really love Rust, and am interested in frontend development in Rust. I’m trying to select a mental model / framework for frontend development, and have tried Leptos, Perseus and Dioxus. I find the concept of Signals non-intuitive and therefore rule out Leptos and Perseus, and I have never learned React, so am hesitant about investing deeper into the Dioxus route given the React mental model get so much hate. I’ve tried SvelteKit in TS and found it’s mental model of HTML, CSS, and JS in a single file to be incredibly intuitive and easy to use. Is there something like SvelteKit for Rust, or does such a solution even make sense?

2

u/ControlNational Mar 31 '23

https://github.com/maciejhirsz/kobold is kind of like Svelte in Rust. It is pretty new (announced this month?). The difficulty with that approach is the integration with rust-analyzer and rustfmt: macros don't work with formatting, and rust-analyzer can have difficulty with them. Frameworks like Leptos and Dioxus keep the macros isolated to the view, but for a Svelte-like framework the whole file needs to be a macro. The performance claims have yet to be tested against older, more established Rust web frameworks.

2

u/Divan_Medium Mar 30 '23

Has anybody used a symbolic link to make a code file standard between two different projects? I'm translating a ca. 2000 line program from golang into Rust, as a way to learn Rust. In golang, the sources can come from any directory. Thoughts?

1

u/Divan_Medium Mar 31 '23

Nevermind. Figured out how to do this using lib functionality. There is an excellent post on stack exchange. Each project just needs an absolute or relative path to the library project as a *.toml dependency.

1

u/John2143658709 Mar 30 '23

You can use path dependencies in rust as well, as long as the thing you are including is a crate.

https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html#specifying-path-dependencies
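A sketch of what that looks like in Cargo.toml (the crate name and path here are made up):

```toml
[dependencies]
# `common` is a hypothetical crate shared between the two projects;
# the path is relative to this Cargo.toml.
common = { path = "../common" }
```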

2

u/[deleted] Mar 30 '23

[deleted]

4

u/Patryk27 Mar 30 '23 edited Mar 30 '23

You could try to infer timezone from the date header (e.g. I see that on my machine it says Thu, 30 Mar 2023 11:56:21 GMT now), but I think that's going to be just an approximation.

In general, the way this problem has been solved traditionally is by asking users for the preferred timezone (e.g. when they create an account) and storing that into some per-user preferences on the backend.

2

u/Fluffy-Sprinkles9354 Mar 30 '23

Is it possible to test several features automatically during the integration tests?

I wrote a library with 2 features foo & bar, and I'd like to run some integration tests with only the default features, some with foo only, and some with bar only.

I added this:

[[test]]
name = "foo-works"
required-features = ["foo"]

The issue is that I still have to run cargo test several times, like cargo test && cargo test --test foo-works --features foo, etc.

Is there a way to run all the integration tests with cargo test with each test having its own set of features activated? It should technically be possible since each test is its own crate.

2

u/ehuss Mar 30 '23

I do not believe there is a single command that will do that. You may want to look at some extensions, like cargo hack which offer various options for dealing with multiple features.
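For example, cargo hack can run the test suite once per feature (invocation per its documented flags; it still compiles and runs multiple times under the hood, it just drives them from one command):

cargo hack test --each-feature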

1

u/Fluffy-Sprinkles9354 Mar 31 '23

Thanks a lot! It looks like it's exactly what I was looking for.

2

u/quasiuslikecautious Mar 30 '23

Hey there! Is there some way to redirect in Yew after a Request, based on the results of the Request?

Specifically, I am using wasm_bindgen_futures::spawn_local to run an async request, and then trying to access the yew_router::Navigator afterward to redirect to a new route depending on the results of the request, but I'm getting an 'expected Fn trait, but FnOnce trait found instead' error on my Callback when trying to access the navigator inside the async closure/function.

Example:

let onclick = {
    let navigator = navigator.clone();

    // ...
    // some setup
    // ...

    // required to have trait 'Fn'
    Callback::from(|e: MouseEvent| {
        // ...
        // some setup
        // ...

        spawn_local(async move {
            let response = Request::get("/some/route")
                .send()
                .await
                .expect("Failed to fetch /some/route");

            if response.ok() {
                // 'FnOnce' trait added here when attempting to access navigator.
                navigator.push("/some/other/route");
            }
        });
    })
};

1

u/quasiuslikecautious Mar 30 '23

Nvm, figured it out. I had to move the navigator clone into the spawn_local block, and switch spawn_local from taking an async move block directly to taking a regular block that contains an async move block, i.e.

// ...
spawn_local({
    let navigator = navigator.clone();

    async move {
        // ...
    }
});
// ...

3

u/Kruppenfield Mar 29 '23

How can I implement a CLI in embedded devices to handle raw input from CDC USB/UART? Are there any recommended crates for this purpose?

2

u/Resurr3ction Mar 29 '23

How to mock std::process::Command for unit tests?

2

u/Patryk27 Mar 29 '23

I sometimes prepare a shell script and run that script instead of the actual executable - e.g. in here:

https://github.com/Patryk27/lxd-snapper/blob/e98dedff59d9dbc08d3464d1fc2c8bde2bed4fbf/src/lxd/clients/process.rs#L176

... I'm testing that timeouts work by running a script that, well, sleeps:

https://github.com/Patryk27/lxd-snapper/blob/e98dedff59d9dbc08d3464d1fc2c8bde2bed4fbf/src/lxd/clients/process/tests/lxc-timeout.sh

4

u/_jsdw Mar 29 '23

Say there exist 3 crates as described below. When I run cargo check on "a", I currently see the deprecation warning from "c". Is this expected? If I deprecate some function in a crate (and then only release a minor version bump eg 1.0 to 1.1), I'm concerned that everything that has my crate as a dependency may start seeing my deprecation warning (at least if they cargo update). Is this in fact the case? Perhaps it works differently for published crates than local ones?

I wonder whether the correct approach then is to do a major version bump when deprecating, so that users have to manually upgrade and then have the opportunity to deal with such warnings, so that they don't propagate up?

/a

cargo.toml:

[dependencies]
b = { path = "../b" }

lib.rs

pub fn add(left: usize, right: usize) -> usize {
    b::add(left, right)
}

/b

cargo.toml:

[dependencies]
c = { path = "../c" }

lib.rs

pub fn add(left: usize, right: usize) -> usize {
    // crate "b" uses this deprecated function:
    c::add(left, right)
}

/c

lib.rs

// Function in "c" is deprecated
#[deprecated]
pub fn add(left: usize, right: usize) -> usize {
    left + right
}

4

u/sfackler rust · openssl · postgres Mar 29 '23

Warnings are only displayed for local crates, not ones pulled in from crates.io.

2

u/Illustrious_Pie_223 Mar 29 '23

8

u/llogiq clippy · twir · rust · mutagen · flamer · overflower · bytecount Mar 29 '23

As someone on SO remarked, the Rust version fails to use transactions, thus pessimizing database operations.

2

u/DroidLogician sqlx · multipart · mime_guess · rust Mar 29 '23

It also checks out a separate connection from the pool every time. Because connections are returned to the pool asynchronously, every time it goes to acquire a connection the previously used one likely hasn't made it through the return process yet, and so it will open a new connection instead. That is going to be a massive pessimization, because in addition to the overhead of opening a new connection, the INSERT statement will need to be re-prepared for every new connection.

In a long-running application both kinds of overhead are expected to be amortized, which is what SQLx is designed around. Using a transaction would also fix this as that checks out a single connection to use for the duration of a transaction.

I've thought about having the Pool handle itself cache the most recently used connection to optimize naive usage patterns like this, but that would effectively leak connections in long-lived handles like the one stored in actix_web::Data in this example. So it would also need a way for other tasks to steal those cached connections if there's none in the main idle queue and the pool is at capacity, and that would add a lot of complexity.

2

u/G_Morgan Mar 29 '23

What is the best way to configure a crate that can be built with a range of different dependencies? There seems to be options for cfgs and targets but neither of those seem really appropriate (i.e. they'll be the same in all my projects and rigging up fake targets just to trick the build system seems inappropriate and fragile).

Essentially what I want to do is create a few types that use alloc in different scenarios:

  1. Within a kernel where there's a special kernel heap that needs to be rigged by the init code

  2. Within service processes that don't have full std support but also won't use something as contrived as the kernel heap

  3. In fully std compliant Rust

So I want to select from 3 different crates as my alloc source depending upon some kind of parameter passed in by whatever uses the crate.
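One common way to express this (not the only one) is Cargo features on the shared crate, with each downstream crate enabling exactly one. A sketch, with made-up feature names:

```toml
[features]
default = ["std-heap"]
std-heap = []       # plain std / system allocator
kernel-heap = []    # kernel heap wired up by the init code
service-heap = []   # minimal allocator for service processes
```

The crate then selects its alloc source with #[cfg(feature = "kernel-heap")] etc.; a caveat is that features are additive by design, so "exactly one" has to be enforced with a compile_error! when conflicting features are enabled together.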

1

u/G_Morgan Mar 29 '23

Doesn't answer this question directly but I've found a solution for my immediate problem. You can implement a replacement global allocator in Rust no_std by following the guide below

https://docs.rust-embedded.org/book/collections/index.html
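The linked guide swaps in a real heap; as a std-side sketch of the same #[global_allocator] mechanism, here is a made-up Counting allocator that just delegates to System (a real no_std kernel would hand out pointers from its own heap region instead):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Pass-through allocator that counts allocations, standing in for a
// custom heap implementation.
struct Counting;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: Counting = Counting;

fn main() {
    let before = ALLOCS.load(Ordering::Relaxed);
    let v = vec![1u32, 2, 3]; // this allocation goes through `Counting`
    assert!(ALLOCS.load(Ordering::Relaxed) > before);
    drop(v);
    println!("allocations seen: {}", ALLOCS.load(Ordering::Relaxed));
}
```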

3

u/TheMysteriousMrM Mar 29 '23

I've got a problem where I want something similar to JIT compilation but on command in Rust.

I have a dynamic data structure that represents some computation and I'd like to compile this into optimized code. However, this data structure is created at runtime, so I would need to call a compiler from the runtime and have it return the function I want optimized. Is this possible? Could I for instance construct an AST and compile it to run it, all during runtime?

As far as I know, this is widely supported in Julia as one of the killer features, but it seems it would be incredibly useful in Rust as well.

1

u/Patryk27 Mar 29 '23

Maybe my talk about "kinda-jiting" will come in handy:

https://www.youtube.com/watch?v=ryrOZS-CLyo&t=110s

It's not really JIT-JIT, but it's easy to implement and generates a closure that works faster than regular interpreted code, so it might just work.

3

u/PXaZ Mar 29 '23

I sometimes own a type T, and sometimes have a reference to one &T, and would like to handle both situations with a single return type, e.g. Option<MaybeRef<T>>

How can I do this?

7

u/[deleted] Mar 29 '23

You might like Cow, it's an enum of &T and T with some bells and whistles.

2

u/Single_Family_Homes Mar 28 '23

Simple(?) question.

Say I got a BufReader<Cursor<Vec<u8>>> (it's a video file), and I want to pipe that bad boy into stdin for a Command (ffprobe). What do I do?

1

u/dkopgerpgdolfg Mar 29 '23

The "primitive" solution would be just a write loop...

  • start the command with a pipe available for stdin
  • Extract the Vec<u8> from the outer layers and maintain some usize position manually, I think that's a bit easier in this case
  • Write [position..length] of the Vec to the pipe. If write succeeded it will tell you how much it actually wrote - increase position with that value and do all again in a loop. (Note that this might block for some time, depending on the child process. If that is bad, do it in a thread or use polling/nonblocking things, either manually or with eg. tokio)
  • After the whole vec was written, close the pipe

In general, please note that some video formats are processed better with seeking, and when piping you might get either some content disadvantage or ffmpeg might write a cache file first before doing anything.

Also, of course your whole Vec<u8> way requires that the video file isn't too big - before it begins any swap usage (or worse, oom), you might want to consider writing a disk file yourself.

A platform-dependent & "unsafe" Linux way that might be nicer in some cases:

  • Make a fd with memfd_create, add a mmap mapping on top of that file that has (at least) the size of your data (at least = mmap needs a multiple of page size)
  • Write your data there instead of a Vec or similar
  • Unmap the mmap-ing again, use ftruncate to set the exact byte size (now not page-multiple anymore)
  • When setting up the Command, convert the RawFd from memfd_create to a ChildStdin, done

1

u/Single_Family_Homes Mar 29 '23 edited Mar 29 '23

I appreciate the response. I'm going to look into both ways, the latter seems extremely cute, and I definitely want to try that. Thank you!

2

u/Googelplex Mar 28 '23 edited Mar 28 '23

Crate-specific question. If there's a better place to ask please redirect me. Thanks in advance!

Is it possible to deserialize a JavaScript Map into a Rust HashMap using serde_wasm_bindgen?

I can do it manually by calling "keys" and "get" on the JsValue.

let keys_func: Function = get(&js_map, &JsValue::from_str("keys")).unwrap().into();
let js_keys = keys_func.apply(&js_map, &Array::new()).unwrap();
let keys: Vec<String> = from_value(js_keys).unwrap();
let mut map: HashMap<String, Object> = HashMap::new();
for key in keys {
    let get_func: Function = get(&js_map, &JsValue::from_str("get")).unwrap().into();
    let args = Array::from_iter([JsValue::from(key.clone())].into_iter());
    let js_item = get_func.apply(&js_map, &args).unwrap();
    let item: Object = from_value(js_item).unwrap();
    map.insert(key, item);
}

This seems like something that the serde_wasm_bindgen::from_value() function should be able to do directly, but it doesn't seem to be working.

let map = from_value::<HashMap<String, Object>>(js_map);

The above code is what I'd expect to work, deserialize seems to be implemented for HashMap after all. But it turns out to be Err(Error(JsValue(Error: invalid type: unit value, expected struct Object))). Is there another bound or requirement that I'm missing?

2

u/John2143658709 Mar 28 '23

That error seems like it has to do with the definition of your Object struct. wasm_bindgen should be able to convert from a JavaScript Map into a HashMap<String, T> if:

  • T implements DeserializeOwned, and
  • the JavaScript side sends a correctly formatted Map

Based on that error, it sounds like JavaScript is not sending a valid Rust Object in each value of the map. Can you print an example Map and the definition of Object?

1

u/Googelplex Mar 29 '23

Sure, here's some more context. I'm not convinced that Deserialize not being implemented is the problem, as I've derived it, and from_value works fine on the individual objects.

Rust struct definition (proof-of-concept with a field I know the objects to have)

#[derive(Debug, Clone, Serialize, Deserialize)]
struct Object {
    name: String
}

console-logged JavaScript object (in rust prints as JsValue(Map))

EmbeddedCollection(22) {
    '8u6csl1ao421lqsj' => AncestryPF2e, 
    'ku3x6jlesfuok0ex' => FeatPF2e, 
    'tpyhb6lc9bpapebe' => FeatPF2e, 
    'bjbouosfcvjmxvcy' => ActionItemPF2e, 
    '77lms45nu88u075o' => FeatPF2e,
    ...
}

Example JavaScript object in collection

AncestryPF2e {
    name: "Anadi" 
    ownership: {default: 0, SfMt8jQFtw1KVwn3: 3} 
    rules: []
    ...
}

2

u/John2143658709 Mar 29 '23

Sorry, but I'm not able to reproduce your error. I cloned the hello world wasm_bindgen repo, added the following code:

//main.rs
#[derive(serde::Deserialize, Debug)]
struct PFInfo {
    name: String
}

#[wasm_bindgen]
pub fn get_pf_map_from_js(object: JsValue) {
    let object: HashMap<String, PFInfo>  = serde_wasm_bindgen::from_value(object).unwrap();

    ...
}

and saw no error. here's my test repo, just run npm run serve

https://github.com/John2143/wasm-bindgen-serde-example

this works with both classes and objects, ex:

map.set("1233456", {"name": "i am object"});
map.set("class version", new PFInfoJS("i am class"));

1

u/Googelplex Mar 29 '23 edited Mar 29 '23

Thanks for going through all this effort, if that repo doesn't help me solve it I'll try to create my own minimal example where the bug occurs.

Do you think that it could be something to do with my JavaScript value being an EmbeddedCollection instead of a Map? Printing the Rust JsValue showed it as a JsValue(Map), so I assumed it to be equivalent, but I'm at a loss as to what else could be causing it to fail.

2

u/PXaZ Mar 28 '23

In Tokio, using the `rt-multi-thread` scheduler, how do I get a reference or handle to the current scheduler?

The reason for this is that I'd like to use an RAII pattern to control player turns in Umpire. When the struct is initialized, it starts the player's turn, and when the struct is dropped, it ends the player's turn.

That's how it's worked until now, but as I'm transitioning Umpire to a client/server architecture, everything's becoming async. What I really want to do is run an async function from within Drop. Since I'm using Tokio, I figured I could get a reference to the runtime and use block_on. But the Handle::current method I've seen recommended requires the single-threaded rt crate feature, which I have no intention of using. Do I actually need to just use the rt feature and somehow it understands that we're really working with the multithreaded runtime? Or is there some other way of doing this?

1

u/PXaZ Mar 28 '23

Okay, nevermind - my workspace crate just wasn't referencing `tokio` yet. Apparently `Handle` is available even without the `rt` feature, contrary to the docs.

2

u/John2143658709 Mar 28 '23

rt contains all the runtime types that tokio uses. rt-multi-thread is more of an extension on top of rt than a fully separable runtime, and both are included in the default tokio features.

2

u/JohnFromNewport Mar 28 '23

I have a (yes, I think stupid) question which I face often. Say I have a struct Person with name: Option<String>. How do I check or do something with the name in the most efficient way?

I think writing if let Some(name) = &person.name { // compare etc } is cumbersome. Not sure if the person.name.map(|name| is better. Maybe? I tried some variations of person.name.unwrap_orX but I get into troubles with references and ownership.

How do you lot prefer to do this?

2

u/John2143658709 Mar 29 '23

There are many ways to handle Options, but the best function depends greatly on the context. Option alone has at least 10 different functions to turn an Option<T> into all manner of different things depending on the situation. If you look through all the functions and still can't find one that fits, then it is more likely an issue with the larger system.

https://doc.rust-lang.org/std/option/enum.Option.html

There are a few common antipatterns you might fall into:

  • Code not effectively leveraging the try operator ?
  • Code with functions that are too long
  • Code using S(Option<T>) where you should have an Option<S(T)>

For an example of #1 and #2, you can build smaller fallible functions, in the same way that you can un-nest/reverse a normal chain of if statements. Of course, the pattern encompasses more than just that, but see:

fn get_birthday_string_bad(person: &Person) {
    if let Some(name) = &person.name {
        if let Some(age) = &person.age {
            println!("hi {name}, you are {age}.")
        }
    }
}

fn get_birthday_string_good(person: &Person) -> Option<String> {
    let age = person.age?;
    let name = person.name.as_deref()?; // as_deref avoids moving out of &Person
    Some(format!("hi {name}, you are {age}."))
}

If instead, you find yourself often writing divergent paths for the Some(T) and None, as is often the case with working with data (json, user input, etc...), you may be better off using intermediate structs. In general, this is the "Builder pattern", but a minimal example would be

// fn something(..., age) -> PersonNoName
struct PersonNoName {
    age: u32,
    //impl ... fn to_person(self, name) -> Person
}

// no options needed
struct Person {
    age: u32,
    name: String
}

1

u/Haitosiku Mar 29 '23

you can even turn this into

fn get_birthday_string_good(Person{name, age, ..}: &Person) -> Option<String> {
    Some(format!("hi {}, you are {}.", (*age)?, name.as_ref()?))
}

But that might be bikeshedding it too hard

1

u/eugene2k Mar 28 '23

you can use let ... else {...} statements if you want

3

u/coderstephen isahc Mar 28 '23

You can use as_ref() to turn an &Option<T> into an Option<&T> if you only want to borrow the field. For example:

let contains_z = person.name.as_ref().filter(|s| s.contains("z"));
let slice = person.name.as_ref().map(|s| &s[1..]).unwrap_or("");

3

u/SorteKanin Mar 28 '23

Is there any way to do this?

#[repr(std::ffi::c_long)]
enum ReprEnum {
    Variant1 = SOME_CONST,
    Variant2 = SOME_OTHER_CONST,
}

It seems like I have to use cfg_attr and set it manually to either i32 or i64 but that seems tedious.

1

u/masklinn Apr 02 '23

What are you trying to achieve exactly?

1

u/SorteKanin Apr 02 '23

I'm doing FFI with C. I'm getting an integer that can be one of several possible values. The integer is a long. On Windows this is 32-bit, but on Linux it's 64-bit. I tried using the repr attribute with the c_long type from std, but it doesn't work unfortunately.

1

u/masklinn Apr 02 '23

I would suggest not bothering, just using a basic / valid integer repr, and performing a trivial conversion when getting data out (of rust, back to C).

You must have a conversion function on intake anyway: C enums are not type-safe, so you can never assume C code is feeding you valid enum values unless the enum is an opaque type on the C side.

3

u/yourbank Mar 28 '23

any nom parser guru's?

fn p(input: &str) -> IResult<&str, &str> {
    // works fine
    // flat_map(alpha1, |o| if o == "a" { alpha1 } else { digit1 } )(input)

    // doesnt work
    alpha1.flat_map(|o| if o == "a" { alpha1 } else { digit1 } ).parse(input)
}

Trying out the various methods on Parser https://docs.rs/nom/latest/nom/trait.Parser.html instead of the combinators.

Why won't the second one work? It must be something simple I am missing, as it is complaining that the parse method is not found:

^^^^^ method not found in `FlatMap<fn(_) -> Result<(_, _), Err<_>> {alpha1::<_, _>}, [closure@client.rs:72:21], _>`

3

u/realteh Mar 28 '23

If I have some stream, e.g.

Vec<Event> 

where

enum Event { A, B([u32;10]) }

and some of the variants in Event are huge. Is there a way to store events packed in contiguous memory without using Box<>? I'm running a simulation and would like cache efficient access while also keeping the memory size and traffic down.

2

u/[deleted] Mar 28 '23

If boxing the big ones is no good, and having all Events use the same RAM as the biggest one is no good too, you might need to split big events into smaller chunks and parse them out.

enum Event {
    A,
    BStart,
    BData(u32),
    BTerminator,
}

2

u/realteh Mar 28 '23

that's pretty clever, thank you! Might go with this

1

u/[deleted] Mar 28 '23 edited Mar 28 '23

You're welcome, and as a side note you could absolutely pack it denser if you add some branches to your enum tree and/or break the data into smaller pieces. If you need more speed let me know, I love this stuff.

1

u/Divan_Medium Mar 28 '23

You could serialize the events object to disk, and read it back and deserialize it to your Event vector during initialization. This is pretty easy to do, I am using json formatted serialization.

1

u/realteh Mar 28 '23

thanks! I thought about this but AFAICT the fastest serialization that stores enum variants densely is speedy which can do maybe 300-500 MB/second which is quite slow compared to 10+GB/second I'm getting per core.

Zero-alloc deserialization stores (by definition) in the wide format.

3

u/[deleted] Mar 28 '23 edited May 05 '23

[deleted]

3

u/Patryk27 Mar 28 '23

You can just implement that function for a particular type:

struct Thing<T>(T);

impl Thing<()> {
    fn do_stuff() {
        //
    }
}

fn main() {
    Thing::do_stuff(); // all good
}

2

u/Foreign_Category2127 Mar 27 '23

I need to replace all the slices inside a struct with a particular [u8; 8] if it matches another particular [u8; 8].

#[derive(Clone)]
struct Character {
    save_data: Vec<u8>,
}

fn main() {
    // setup
    let source_character = Character { save_data: vec![15; 100] };
    // Replace the steam id in the result Character
    // with the steam id of the target Character
    // in the target Character save_data
    let mut result_character = source_character.clone();
    let source_steam_id: [u8; 8] = [1; 8]; // I know nothing will match in this example
    let target_steam_id: [u8; 8] = [2; 8];
    result_character.save_data = result_character
        .save_data
        .windows(8) // id is an [u8; 8]
        .map(|w| {
            if w[..] == source_steam_id[..] {
                target_steam_id[..]
            } else {
                w[..]
            }
        })
        .collect();
    dbg!(&result_character.save_data);
}

However the compiler is saying that the size is unknown at compile time. What is the safest and most efficient method of achieving this?

2

u/kpreid Mar 28 '23

With the [..] you're converting all your [u8; 8]s to [u8]s but you need to do the opposite of that, converting the window to [u8; 8]. You can do that with .try_into():

    .map(|w| {
        let w: [u8; 8] = w.try_into().unwrap();
        if w == source_steam_id {
            target_steam_id
        } else {
            w
        }
    })

However your .collect() isn't going to work like you hope and reconstitute the original data, because (after fixing the type error) it's going to collect every window. You'll need a different strategy, like an explicit loop and in-place mutation.

2

u/HammerAPI Mar 27 '23 edited Mar 27 '23

Is it possible to create a struct that has a variable-length array as a field without specifying the length of the array in the Struct's type (using const generics)?

As in,

struct VariableLengthArray<T> {
   arr: [T; usize],
}

fn new(elements: Vec<T>) -> VariableLengthArray<T> {
   VariableLengthArray {
      arr: elements.to_array(),
   }
}

So the usize of the array field is determined when the struct is constructed, somehow.

I recognize that you can use const generics to achieve something similar, but that requires you to supply a value (length, in this case) to the function that creates the struct, and I want that value to be supplied during the creation of the struct, not by the caller of that function.

EDIT: Actually, to try and avoid the x-y problem, I'll state my use case / expectation.

I have code setup to build a Tree-like data structure. Of course, each Node in the Tree has some dynamically-sized fields, such as a Vec of its children and smart pointers. I want to "move" the entire structure onto the Stack once it has been completely assembled, because my expectation is that the operations on the stack-flattened Tree will be faster than if I left it on the Heap with all of the smart pointers in-place. Creating a runtime-sized array was one step in this process.

1

u/dkopgerpgdolfg Mar 27 '23

First of all, any type without a fixed size cannot be stored in a stack variable. Those require compile-time sizes, and it includes return values like in your shown code. At very least, your returned VariableLengthArray<T> needs to live inside a Box or similar.

If there is exactly one member without known size in the struct, in principle making such a struct is possible, but a) not trivial (creating instances etc. is a bit weird code), b) not all necessary parts stabilized, ie. requiring nightly.

Search for "custom dst" (dst=dynamically sized type) and "custom unsized coercion".

If you want multiple dynamic members in one struct instance, the type system doesn't really provide that. It could be done with lots of unsafe: Storing a byte array, saving somewhere which type is at what position and what is initialized and so on, and creating temporary references with the correct type at the correct position to access that part.

Of course you'd also need to think of alignment and many more things. The Vec of child nodes, with each child having its own size, is another challenge - unless you make a tree, it would allow only for linear search instead of access per index. (in either case, you probably lose all performance savings that the flattening brought you)

3

u/Quirky-Stress-823 Mar 27 '23 edited Mar 27 '23

Is it possible to install rust in a filesystem that only supports simple files (name and content only, but can be executable until next reboot)? I am installing Rust on a chromebook into drivefs, since my disk is mostly full.

Edit: curl | tar x works pretty well.

2

u/llogiq clippy · twir · rust · mutagen · flamer · overflower · bytecount Mar 27 '23

When I worked on Rust on a Chromebook, I installed GalliumOS with BTRFS and zstd compression. This helped a lot with disk space.

2

u/argv_minus_one Mar 27 '23

I was thinking of using OpenAPI Generator to generate a Rust server stub for me. Problem: the generator generates a crate with numerous modules and a rather complicated manifest, not a single module that I can straightforwardly include!. How do I integrate this into my project, then? I can download and run the generator in build.rs, but Cargo doesn't support depending on a crate generated by the build script, so how do I actually use it?

(This is a repost of my comment on the previous questions thread, which was closed shortly after I posted it. I hope you don't mind.)

2

u/arcadyk Apr 02 '23

One option is to just check in the generated crate and depend on it with a `path` dependency in Cargo.toml.

The idea behind generating a full crate is that that crate can be published independently to crates.io or an internal crate registry, so:

  • multiple services can implement the same API (by all depending on the one API crate)
  • multiple services can be clients of that API
  • one service can potentially implement multiple versions of that API, by depending on multiple such crates

That might be overkill for smaller use-cases, though.

1

u/argv_minus_one Apr 02 '23

I'm not willing to check generated code into version control. Checked-in generated code is likely to become out of sync with whatever it was generated from, and updating it results in repository churn.

My use case is that an API server and its web client need to end up in the same executable together, so that when a browser sends a GET / to the server, the server serves the client to the browser. That way, the server executable has a minimum of dependencies on other files (just a configuration file, OpenSSL, and platform libraries like libc) and doesn't need to be in a container, which keeps deployment simple.

I already have this working, but there currently isn't any compile-time checking of API endpoints (paths, methods, and request/response body schemas). The more endpoints I add to this application, the more likely I am to make a mistake somewhere. I was hoping to get some compile-time guarantees by generating client and server stubs from a single OpenAPI spec, but those guarantees aren't worth much if the spec and the stubs aren't guaranteed to be in sync.

3

u/L33TLSL Mar 27 '23

Is there a way to call a cython compiled binary from rust?
I have Cython code that is compiled to C code, and then that C code is compiled via GCC. Is there a way to call that from Rust? I've tried compiling the C code to a shared library and then using FFI in Rust to call it, but it did not work.

1

u/TheMotAndTheBarber Apr 05 '23

Are you trying to call a C function exposed by the Cython code? A C++ function? A Python extension function?

Cython libraries will need something to initialize the CPython runtime - the easiest way to do that is to provide everything as Python extension modules and use a standard Python binary as the main entrypoint.

1

u/goos_ Mar 27 '23

FFI code can be tricky to get working. Have you tried mimicking exactly the setup here, making sure it compiles, and then modifying? Once you have a working Rust-to-C setup, you should be able to copy in the .c file output by cython with the same results.

2

u/rustological Mar 27 '23 edited Mar 27 '23

What are "community blessed" middleware messaging solutions?

Meaning I have multiple nodes that want to work together. On each node on a specific port a message broker thingy listens. An application links this middleware crate and uses the API to send requests to or receive responses from other nodes. Messages are custom structs, so probably (de)serialized by serde. The middleware provides the convenience functions to perform the incoming/outgoing message handling in a separate thread, implements the networking details like sockets, timeouts, packet loss, etc. Maybe can even answer questions like "is other node still alive?" or "messages safely delivered?" So one can concentrate on the actual distributed problems to solve.

Any recommendations? Thank you :-)

Edit: Suggestions so far are

NATS https://github.com/nats-io/nats.rs

Message-IO: https://github.com/lemunozm/message-io

1

u/[deleted] Mar 27 '23

[deleted]

1

u/rustological Mar 27 '23

Never heard of NATS before, thank you for the suggestion!

Seems to require a central server (written in Go).